File this in the “yes that’s very clever of you, but please don’t” pile.
Amazon has demonstrated an experimental feature that shows how a child can choose to have a bedtime story read to him by Alexa… using his dead grandmother’s voice.
Rohit Prasad, Amazon’s head scientist for Alexa AI, told attendees of the company’s annual re:MARS conference:
“…in these times of the ongoing pandemic, so many of us have lost someone we love. While AI can’t eliminate that pain of loss, it can definitely make their memories last.”
Amazon says that its AI systems can learn how to mimic someone’s voice from just a single minute’s worth of recorded audio.
Which seems a little creepy to me. Because chances are that many of us have far more than a minute of our voices recorded somewhere – whether on voicemails, in videos, or – oops! – on podcasts.
And your voice may not just be used to console little Benny at bedtime. It might also be abused to unlock your smartphone, or to pass the voice-ID checks used by the likes of HMRC.
Thankfully there is no suggestion (yet) that Amazon is going to release this functionality to the wider world. But give them time, give them time.
Amazon is far from the only company with the smarts to pretty convincingly mimic someone’s voice from just a small snatch of audio, but that doesn’t mean it’s a cool thing to do. And there are so, so many ways in which it could be abused…
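To give you a sense of just how low the bar has become, here is a minimal sketch using the open-source Coqui TTS library’s XTTS v2 model – one of several freely available tools that can clone a voice from a few seconds of reference audio. To be clear, this illustrates the general technique, not Amazon’s system, and the file names are hypothetical:

```python
# Minimal voice-cloning sketch using the open-source Coqui TTS library
# (pip install TTS). This is NOT Amazon's system - just an illustration
# of how accessible the technique is. File names are hypothetical.
from TTS.api import TTS

# Load the multilingual XTTS v2 model, which supports zero-shot voice
# cloning from a short reference clip.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Synthesise new speech in the voice captured in grandma.wav -
# a few seconds of clean audio is enough for a recognisable imitation.
tts.tts_to_file(
    text="Once upon a time, in the land of Oz...",
    speaker_wav="grandma.wav",   # hypothetical reference recording
    language="en",
    file_path="cloned_story.wav",
)
```

That’s the whole thing. No lab full of boffins required – just a pip install and a short recording of your target.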
So, what’s the solution? How can we stop people using deepfaked versions of our voice without our permission?
I’m not sure we can. Maybe it would be cool if the boffins at Amazon thought about how to solve that problem instead of teaching Alexa to read “The Wizard of Oz” using the voice of a dead woman.