Neuroscientists have developed a brain-machine interface that uses brain activity to control a virtual vocal tract, a computer simulation of the lips, jaw, tongue, and larynx, to produce natural-sounding synthetic speech.

While previous efforts at this were mildly successful at best, this time the scientists hypothesized that the brain's speech centers do not encode the acoustic properties of sound, but instead coordinate the movements of the mouth, tongue, and throat during speech. The hypothesis turned out to be correct.
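That hypothesis implies a two-stage decoder: neural activity is first mapped to articulator movements, and only then are those movements mapped to sound. The sketch below is a purely illustrative toy, not the researchers' actual model; all dimensions, weights, and signals are made up, and real systems use learned recurrent networks rather than random linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: neural channels, articulator
# parameters, acoustic features, and time steps.
N_NEURAL, N_ARTIC, N_ACOUSTIC, T = 64, 12, 32, 100

# Stage 1: decode articulator kinematics (lips, jaw, tongue,
# larynx) from neural activity. Random weights stand in for a
# trained model.
W_artic = rng.normal(size=(N_ARTIC, N_NEURAL)) * 0.1

# Stage 2: map kinematics to acoustic features that would drive
# a speech synthesizer.
W_acoustic = rng.normal(size=(N_ACOUSTIC, N_ARTIC)) * 0.1

neural = rng.normal(size=(N_NEURAL, T))  # simulated cortical recordings
kinematics = W_artic @ neural            # stage 1: movements, not sounds
acoustics = W_acoustic @ kinematics      # stage 2: sound parameters

print(acoustics.shape)  # (32, 100)
```

The key design point is the intermediate kinematic representation: the decoder never predicts sound directly from brain signals, which is what set this work apart from earlier attempts.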

This technology could be a first step toward restoring the voices of people who have lost the ability to speak due to paralysis or other impairments. Still, it has a success rate of approximately 50% and needs further fine-tuning.

Read the full story: UCSF
Scientific publication: Nature
