
New technology can restore voices, finds study

The new system being developed demonstrates that it is possible to create a synthesised version of a person’s voice that can be controlled by the activity of their brain’s speech centers.

wellness Updated: Apr 27, 2019 13:46 IST
Asian News International
According to a recent study, we can generate entire spoken sentences based on an individual’s brain activity.(Unsplash)


A recent study claims to have developed a brain-machine interface that can generate natural-sounding speech by using an anatomically detailed computer simulation of the vocal tract.

The development is expected to restore the voices of people who have lost the ability to speak due to paralysis and other forms of neurological damage.

The details were published in the journal Nature. Stroke, traumatic brain injury, and neurodegenerative diseases such as Parkinson’s disease, multiple sclerosis, and amyotrophic lateral sclerosis often result in an irreversible loss of the ability to speak.

Some people with severe speech disabilities learn to spell out their thoughts letter-by-letter using assistive devices that track very small eye or facial muscle movements.

However, producing text or synthesised speech with such devices is laborious, error-prone, and painfully slow, typically permitting a maximum of 10 words per minute, compared to the 100-150 words per minute of natural speech.

The new system demonstrates that it is possible to create a synthesised version of a person’s voice controlled by the activity of their brain’s speech centers.

In the future, this approach could not only restore fluent communication to individuals with a severe speech disability, the authors say, but could also reproduce some of the musicality of the human voice that conveys the speaker’s emotions and personality.

“For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity. This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss,” said Professor Chang.

The researchers explained how the human brain’s speech centers choreograph the movements of the lips, jaw, tongue, and other vocal tract components to produce fluent speech.

“The relationship between the movements of the vocal tract and the speech sounds that are produced is a complicated one. We reasoned that if these speech centers in the brain are encoding movements rather than sounds, we should try to do the same in decoding those signals,” said speech scientist Anumanchipalli.

In their new study, the researchers asked patients with intact speech, who had electrodes temporarily implanted in their brains to map the source of their seizures in preparation for neurosurgery, to read several hundred sentences aloud while activity was recorded from a brain region known to be involved in language production.

Based on the audio recordings of participants’ voices, the researchers used linguistic principles to reverse engineer the vocal tract movements needed to produce those sounds -- pressing the lips together here, tightening vocal cords there, shifting the tip of the tongue to the roof of the mouth, then relaxing it, and so on.
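This "reverse engineering" step amounts to learning an inversion from sound back to the articulator positions that produced it. A minimal sketch of the idea, using synthetic paired data and a plain least-squares fit (the dimensions, noise level, and linear model are illustrative assumptions, not the study's actual method):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy paired data standing in for the study's recordings: acoustic
# features extracted from audio, and the articulator trajectories
# (lips, jaw, tongue) assumed to have produced them.
n_frames, n_acoustic, n_articulators = 500, 24, 12
true_map = rng.normal(size=(n_acoustic, n_articulators))
acoustics = rng.normal(size=(n_frames, n_acoustic))
articulators = acoustics @ true_map + 0.01 * rng.normal(
    size=(n_frames, n_articulators))

# "Reverse engineer" the movements: fit a linear inversion from
# acoustic features to articulator positions by least squares.
inv_map, *_ = np.linalg.lstsq(acoustics, articulators, rcond=None)

# With enough frames and little noise, the fitted inversion recovers
# the mapping that generated the data.
print(np.allclose(inv_map, true_map, atol=0.05))
```

In the real system this mapping is far from linear, which is why the researchers turned to neural networks for the production-quality version.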

This detailed mapping of sound to anatomy allowed the scientists to create a realistic virtual vocal tract for each participant that could be controlled by their brain activity.

This comprised two “neural network” machine learning algorithms: a decoder that transforms brain activity patterns produced during speech into movements of the virtual vocal tract, and a synthesizer that converts these vocal tract movements into a synthetic approximation of the participant’s voice.
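The two-stage structure described above can be sketched in miniature. Everything in this snippet is an illustrative stand-in: the dimensions are invented, and simple linear maps replace the study's recurrent neural networks; only the shape of the pipeline (brain activity → vocal tract movements → acoustics) follows the description:

```python
import numpy as np

# Hypothetical dimensions for illustration: 256 recording channels,
# 33 vocal-tract kinematic features, 32 acoustic features per frame.
N_NEURAL, N_KINEMATIC, N_ACOUSTIC = 256, 33, 32

rng = np.random.default_rng(0)

# Stage 1 ("decoder"): brain activity -> virtual vocal tract movements.
W_decode = rng.normal(size=(N_NEURAL, N_KINEMATIC)) * 0.01

# Stage 2 ("synthesizer"): vocal tract movements -> acoustic features.
W_synth = rng.normal(size=(N_KINEMATIC, N_ACOUSTIC)) * 0.01

def decode_speech(neural_frames: np.ndarray) -> np.ndarray:
    """Map a (time, channels) block of neural activity to acoustic
    features via the intermediate articulatory representation."""
    kinematics = neural_frames @ W_decode   # stage 1: decoder
    acoustics = kinematics @ W_synth        # stage 2: synthesizer
    return acoustics

# One second of fake neural data at an assumed 200 frames per second.
frames = rng.normal(size=(200, N_NEURAL))
out = decode_speech(frames)
print(out.shape)  # (200, 32)
```

The design choice the quote below explains is exactly this intermediate articulatory layer: decoding movements first, rather than mapping brain activity straight to sound.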

The synthetic speech produced by these algorithms was significantly better than synthetic speech directly decoded from participants’ brain activity without the inclusion of simulations of the speakers’ vocal tracts, the researchers found. The algorithms produced sentences that were understandable to hundreds of human listeners in crowdsourced transcription tests.
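Intelligibility in transcription tests like these is conventionally scored by word error rate: the number of word insertions, deletions, and substitutions a listener's transcript needs to match the reference, divided by the reference length. A self-contained sketch of that standard metric (not the paper's specific evaluation code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# A listener mishears one word out of five: WER = 0.2.
print(word_error_rate("the tip of the tongue", "the top of the tongue"))
```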

“People who can’t move their arms and legs have learned to control robotic limbs with their brains. We are hopeful that one day people with speech disabilities will be able to learn to speak again using this brain-controlled artificial vocal tract,” said student Chartier.


First Published: Apr 27, 2019 13:46 IST