Implant uses brain signals to decode what people are trying to say

Researchers have developed a high-resolution sensor that records brain signals to decode what people are trying to say. While it’s still early days, the device may eventually give people who’ve lost speech to neurodegenerative disease a way to communicate.

Losing the ability to communicate can be a consequence of debilitating neurodegenerative diseases like amyotrophic lateral sclerosis (ALS), in which cognitive function is preserved but the muscles that control speech become weak and tight. One way to restore communication is to decode signals directly from the brain’s motor cortex, which triggers muscle movements in a specific order to produce different sounds.

Researchers from Duke University in the US have developed a brain implant that uses high-resolution neural recordings to decode a person’s brain signals, translating them into what they’re trying to say.

“There are many patients who suffer from debilitating motor disorders, like ALS (amyotrophic lateral sclerosis) or locked-in syndrome, that can impair their ability to speak,” said Gregory Cogan, one of the study’s corresponding authors. “But the current tools available to allow them to communicate are generally very slow and cumbersome.”

Currently, the best speech decoding rate is about 78 words per minute, while we speak at around 150 words per minute. The lag is generally attributable to the number of brain activity sensors used; fewer sensors mean less decipherable information to decode.

To improve upon existing devices, the researchers packed 256 microscopic brain sensors onto a postage-stamp-sized piece of flexible medical-grade plastic, which meant the device was able to obtain higher-quality neural signals with greater spatial resolution. Despite their closeness, neurons only microns apart can have very different activity patterns when coordinating speech. Making accurate predictions about what someone intends to say requires distinguishing signals from neighboring brain cells.
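
For a rough sense of that density, the sketch below works out the approximate per-sensor spacing for a 256-channel grid on a postage-stamp-sized substrate. The footprint dimensions and the square-grid layout are illustrative assumptions, not the device’s published specifications.

```python
# Illustrative only: rough electrode pitch for a hypothetical 256-channel
# array on a postage-stamp-sized substrate. The dimensions below are
# assumptions made for the arithmetic, not the Duke device's actual specs.
import math

n_channels = 256                   # sensor count reported for the array
width_mm, height_mm = 20.0, 25.0   # assumed "postage stamp" footprint

grid = int(math.sqrt(n_channels))  # assuming a square 16 x 16 layout
pitch_x = width_mm / grid
pitch_y = height_mm / grid
print(f"{grid}x{grid} grid -> ~{pitch_x:.1f} x {pitch_y:.1f} mm per sensor")
```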

Compared to other devices (left), Duke's device has twice as many sensors and is much smaller

The implant then needed to be tested. The researchers recruited four patients who were already undergoing brain surgery to treat Parkinson’s disease or to have a tumor removed, briefly interrupting each procedure to test the implant.

“I like to compare it to a NASCAR pit crew,” Cogan said. “We don’t want to add any extra time to the operating procedure, so we had to be in and out within 15 minutes. As soon as the surgeon and the medical team said ‘Go!’ we rushed into action and the patient performed the task.”

The task was a simple one. Participants heard a series of nonsense words like “ava,” “kug,” or “vip” and spoke each one aloud. The implant recorded the activity in the patient’s motor cortex as it coordinated nearly 100 muscles that move the lips, tongue, jaw, and voice box (larynx). The data was then fed into a machine learning algorithm to see how accurately it could predict the sound being made, based only on the brain activity recordings.
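
To make that decoding step concrete, here is a minimal sketch of this kind of pipeline: a linear classifier trained to predict which sound was spoken from per-trial neural features. The feature dimensions, phoneme list, and choice of model are assumptions for illustration, and the data is random stand-in noise, not recordings from the study.

```python
# A minimal, illustrative sketch of speech-sound decoding from neural
# activity. Shapes, labels, and model are assumptions, not the study's
# actual pipeline; the input data here is random noise.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_channels, n_timebins = 300, 256, 20      # 256 sensors, per the article
phonemes = ["a", "g", "k", "p", "b", "v", "u", "i"]  # example sound set

# Stand-in data: in a real experiment these would be band-power features
# from each sensor around speech onset, with one spoken-sound label per trial.
X = rng.normal(size=(n_trials, n_channels * n_timebins))
y = rng.choice(len(phonemes), size=n_trials)

# Cross-validated accuracy of a simple linear decoder; chance here is 1/8.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance ~{1/len(phonemes):.2f})")
```

With real features in place of the random stand-in data, the same cross-validation loop would yield the kind of per-sound accuracy figures reported below.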

For some sounds and participants, like “g” in the word “gak,” the decoder was right 84% of the time when it was the first sound in a string of three that made up a particular nonsense word. The accuracy fell as the decoder parsed sounds in the middle or at the end of a word, and it struggled when two sounds were similar, like “p” and “b.”

Overall, the decoder was accurate 40% of the time. Although that may not sound particularly impressive, the researchers note that the algorithm was working with only 90 seconds of spoken-word data collected during the 15-minute test.

The researchers will continue to improve the device’s accuracy and decoding speed and are using a grant from the National Institutes of Health (NIH) to work on a cordless version.

“We’re now developing the same kind of recording devices, but without any wires,” said Cogan. “You’d be able to move around, and you wouldn’t have to be tied to an electrical outlet, which is really exciting.”

The study was published in the journal Nature Communications.

Source: Duke University
