Amazon’s Alexa will soon be able to recreate the voices of dead people

Amazon's Alexa might soon replicate the voices of family members — even if they're dead.

The capability, unveiled at Amazon's re:MARS conference in Las Vegas, would allow the virtual assistant to mimic the voice of a specific person based on less than a minute of provided recording.

Rohit Prasad, senior vice president and head scientist for Alexa, said at the event Wednesday that the feature is intended to build greater trust in users' interactions with Alexa by giving it "human attributes of empathy and affect."

"These attributes have become even more important during the ongoing pandemic, when so many of us have lost ones we love," Prasad said. "While AI can't eliminate that pain of loss, it can definitely make their memories last."

In a video played by Amazon at the event, a young child asks, "Alexa, can Grandma finish reading me 'The Wizard of Oz'?" Alexa acknowledges the request and switches to another voice mimicking the child's grandmother. The voice assistant then continues reading the book in that same voice.

To create the feature, Prasad said the company had to learn how to produce a high-quality voice from a short recording, rather than hours of recording in a studio. Amazon hasn't provided further details about the feature, which is bound to spark privacy concerns and ethical questions about consent.

Amazon's push comes as competitor Microsoft said earlier this week that it was scaling back its synthetic-voice offerings and setting stricter guidelines to "ensure the active participation of the speaker" whose voice is recreated. Microsoft said Tuesday that it is limiting which customers get to use the service, while also highlighting acceptable uses such as an interactive Bugs Bunny character at AT&T stores.

"This technology has exciting potential in education, accessibility, and entertainment, and yet it is also easy to imagine how it could be used to inappropriately impersonate speakers and deceive listeners," wrote Natasha Crampton, who heads Microsoft's AI ethics division, in a blog post.