Getting Voice: New Speech Synthesis Could Make Roger Ebert Sound More Like Himself – Scientific American

Link to original article

Archive of all clips:
clips.quintarelli.it
(Evernote notebook).

Getting Voice: New Speech Synthesis Could Make Roger Ebert Sound More Like Himself

The approach to creating a more authentic voice for the film critic will attempt to blend two processes: unit selection and the Hidden Markov Model Speech Synthesis System

Since losing his ability to speak in 2006 after a post-cancer-surgery tracheostomy, film critic Roger Ebert has communicated via Post-it notes, an eloquent and hilarious array of hand gestures, and his Mac laptop's speech synthesizer. The version that read out pre-typed introductions at his annual film festival in 2009 had an upper-class English accent the British might call “emollient.” Ebert and his wife Chaz called it “Sir Laurence” and shortly thereafter replaced it with a more accessible American-accented voice called “Alex.” By next year, Ebert may sound even more like himself, courtesy of personalized voice work being carried out by the Edinburgh-based company CereProc (short for cerebral processing and pronounced “serra-prock”).
Ebert’s extensive media recordings—not the least of which is the long-running TV series At the Movies—had led many people to suggest something like this. In his autobiography, Life Itself: A Memoir (Grand Central Publishing), due out September 13, Ebert says the cost was prohibitive until he discovered that CereProc, a specialist in regional accents, had built personalized voices for other individuals. Its Web versions of George W. Bush and Arnold Schwarzenegger, built from found audio samples, seemed promising.
The traditional method of constructing a speech synthesizer—unit selection—involves precisely transcribing hours of recordings and breaking them up into tiny pieces engineers call “phones” that can be stitched back together in different combinations. The joins aren’t always smooth, however, creating audible artifacts.
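To make the idea concrete, here is a minimal sketch (in Python, with invented toy units and features, nothing from CereProc's actual system) of how a unit-selection engine can pick recorded pieces: each candidate unit carries an acoustic feature at its boundary, and dynamic programming chooses the sequence whose joins are smoothest.

```python
# Toy unit selection: pick one recorded unit per target phone so that the
# total "join cost" (acoustic mismatch at each concatenation point) is
# minimized. Unit names and 1-D boundary features are invented examples.

TARGET = ["h", "eh", "l", "ow"]  # target phone sequence for "hello"

# Toy database: phone -> candidate units from different recordings,
# each unit being (id, boundary_feature).
UNITS = {
    "h":  [("h_1", 0.2), ("h_2", 0.9)],
    "eh": [("eh_1", 0.3), ("eh_2", 0.8)],
    "l":  [("l_1", 0.4)],
    "ow": [("ow_1", 0.5), ("ow_2", 0.1)],
}

def join_cost(prev_feat, feat):
    """Mismatch at the concatenation point -- a large jump is audible."""
    return abs(prev_feat - feat)

def select_units(target):
    # best maps each candidate unit at the current position
    # to (total_cost_so_far, path_of_unit_ids).
    best = {u: (0.0, [u[0]]) for u in UNITS[target[0]]}
    for phone in target[1:]:
        nxt = {}
        for unit in UNITS[phone]:
            cost, path = min(
                (c + join_cost(prev[1], unit[1]), p)
                for prev, (c, p) in best.items()
            )
            nxt[unit] = (cost, path + [unit[0]])
        best = nxt
    return min(best.values())

cost, chosen = select_units(TARGET)
print("selected units:", chosen, "| total join cost:", round(cost, 2))
```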

“A lot of engineering work over the last 10 years is how to stop that artifact,” says Matthew Aylett, CereProc’s chief technical officer. “One way is to make the person speak in a more boring way—when there’s less variation it’s easier to join. So that has meant inevitably that within the traditional speech synthesis community the voices sound really boring.” For reading out bank balances that suffices. But “if you want to read out whole paragraphs of text or longer pieces it can get very wearing,” he adds.
CereProc’s most intractable problem has been finding good audio. Unit selection’s technical limitation is simple: garbage in, garbage out. Ebert talked plenty on his movie review programs, but was frequently interrupted and usually had a film playing on a screen behind him. The original tracks of his DVD commentaries were better, but his excitement and engagement made large parts unusable.
“It would have been easier if he had been more boring and stupid,” Aylett says. Other technical difficulties stemmed from differing microphones, equipment and room sounds. “You could hear the change mid-sentence in the first version.”
In the future, CereProc wants to make personalized synthesizers scalable—that is, to automate their creation. A newer approach, called the Hidden Markov Model Speech Synthesis System (HTS), creates a statistical model of captured sounds over time and then inverts it to produce speech. Aylett compares the process to rendering graphics.
HTS has several advantages. It is more tolerant of noise and transcription errors, and requires less input.
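As a rough caricature of that statistical idea (this is not the real HTS toolkit; the phones, frames, and one-dimensional features are invented for illustration), a model can store the mean, spread, and typical duration of the acoustic frames observed for each sound, and synthesis then "inverts" the model by emitting the most likely frame trajectory:

```python
# Toy statistical synthesis: fit a (mean, stdev, duration) model per phone
# from observed acoustic frames, then generate speech as the most likely
# (mean) frame sequence. A vocoder would turn such frames into audio.

import statistics

# Invented training data: phone -> list of observed 1-D acoustic frames.
TRAINING = {
    "h":  [0.21, 0.19, 0.22, 0.20],
    "eh": [0.55, 0.60, 0.58],
    "l":  [0.40, 0.42],
}

def train(frames_by_phone):
    """Fit a simple Gaussian-plus-duration model per phone."""
    return {
        phone: (statistics.mean(f), statistics.pstdev(f), len(f))
        for phone, f in frames_by_phone.items()
    }

def synthesize(model, phones):
    """Emit each phone's mean frame, held for its average duration."""
    trajectory = []
    for phone in phones:
        mean, _stdev, duration = model[phone]
        trajectory.extend([mean] * duration)
    return trajectory

model = train(TRAINING)
print(synthesize(model, ["h", "eh", "l"]))
```

Because the model averages over all its input, a noisy or mistranscribed frame shifts the statistics only slightly instead of being stitched verbatim into the output, which is one way to see why the approach tolerates messier data.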

“The problem at the moment with this system is that the output sounds a bit more like synthesis sounded in the 1990s,” Aylett says. But he thinks voice building must be made more efficient. “We want to do a Web service where people can record their voice and wind up with a voice automatically,” he says. The audio quality won’t be as good, but for most purposes it just needs to be understandable.
Ebert, however, would like broadcast quality, a tougher challenge that is spurring CereProc to consider a hybrid approach that uses the HTS model to select among stored phones, generating only less-common sounds that are missing or poorly represented in the database.
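A hedged sketch of that hybrid logic follows (the MIN_EXAMPLES threshold, the toy unit database, and the generate() stand-in are illustrative assumptions, not CereProc's design): prefer a recorded unit when the database covers a sound well, and fall back to statistical generation for rare or missing sounds.

```python
# Toy hybrid selection: use stored recorded units where coverage is good,
# and statistically generated audio only for poorly represented sounds.

MIN_EXAMPLES = 3  # assumed coverage threshold for trusting recorded units

DATABASE = {               # phone -> recorded unit ids (invented data)
    "h":  ["h_1", "h_2", "h_3", "h_4"],
    "eh": ["eh_1", "eh_2", "eh_3"],
    "zh": ["zh_1"],        # a rare sound, poorly represented
}

def generate(phone):
    """Stand-in for statistical (HTS-style) generation of a missing sound."""
    return f"generated<{phone}>"

def hybrid_select(phones):
    output = []
    for phone in phones:
        units = DATABASE.get(phone, [])
        if len(units) >= MIN_EXAMPLES:
            output.append(units[0])  # a real selector would score candidates
        else:
            output.append(generate(phone))
    return output

print(hybrid_select(["h", "eh", "zh", "ng"]))
# -> ['h_1', 'eh_1', 'generated<zh>', 'generated<ng>']
```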
“It’s great to have someone prominent like this [as a test case]—it moves the technology forward for us and makes it more obvious to other people that it can be done,” Aylett says. His inner engineer’s curiosity has been piqued: “I just want to solve this problem.”

A small sample of the work in progress debuted on Oprah last year, but the date for a finished version is still uncertain.
The time it takes to type out speech will still hamper real-time conversation. Says Aylett, “It gives you real humility as an engineer when you realize that what you’re competing against is a Post-it note.”
A final question will be answered only when Ebert’s new voice goes into service. Will it trigger “the uncanny valley,” that is, the revulsion humans feel toward robots that are almost, but not quite, human?

“I doubt if it will ever be a problem,” Ebert says via e-mail, “but if it is, it’s one I’d like to have.”
