In early October, CNN revealed that veteran voice actor Susan Bennett was the voice behind Siri until Apple changed it in iOS 7. Her recorded utterances, she explained in an interview, had been used by the tech giant (and its likely voice synthesis partner Nuance) to generate the digital assistant's own words.
Of course, even a company as technologically sophisticated as Apple is unlikely to have figured out a way to clone Ms. Bennett and place tiny copies of her inside every iPad and iPhone. Which makes for a question more fascinating than that of Siri's identity: How exactly is a person's voice transformed into a software program that can synthesize any text thrown at it?
My voice is my passport
In Sneakers, a much underrated movie that seems oddly appropriate in today's era of government spying on its own citizens, Robert Redford's ragtag team of hackers manages to bypass a sophisticated voice-based security system by splicing together individual words taped from an unsuspecting employee.
The process of giving voice to iOS's digital assistant may not be all that different, although it is far more thorough. "For a large and dynamic synthesis application, the voice talent (one or more actors) will be needed in the recording studio for anywhere from several weeks to a number of months," says veteran voice actor Scott Reyns, who is based in San Francisco. "They'll end up reading from thousands to tens of thousands of sentences so that a good amount of coverage is recorded for phrasing and intonation."