Facebook-backed brain research decodes thoughts as words instantly for the first time

The University of California San Francisco's Eddie Chang (right) and David Moses, who helped develop the new real-time brain signal translation technology
Noah Berger

Machines that can connect to our brains and interpret their electrical signals hold all kinds of potential in all kinds of areas. A perfect demonstration of this is a research partnership between scientists at the University of California San Francisco, who want to help disabled patients speak again, and Facebook, which is pursuing technologies that would allow the general population to type words with their minds. This collaboration has now produced an exciting breakthrough: technology that, for the first time, decodes brainwaves as words and phrases in real time.

For healthy people (and for better or for worse), there isn't a huge leap between the thoughts we'd like to share and the words that come out of our mouths. But it is a very different story for people who have suffered a stroke, a spinal cord injury or a brain disease that can impede their ability to speak.

A great deal of research into brain-machine interfaces (BMIs) focuses on giving these people their voices back, and scientists have been at it for quite a while. Way back in 2006, for example, we reported on a "mental typewriter," a BMI that demonstrated how brain signals could be converted into control inputs for a keyboard, and there has been no shortage of promising advances since then.

But researchers at the University of California San Francisco (UCSF) have made a breakthrough they say represents a first for the field. It builds on an advance the team made earlier in the year, when it developed a brain implant that could turn brain signals into synthesized speech. One drawback, however, was that the translations took the scientists weeks or months to carry out. Now, the team says the technology is capable of decoding spoken words and phrases in real time.

The new study involved three volunteer subjects, each of whom had a small patch of electrodes implanted on the surface of the brain. The team put a series of nine questions to the volunteers while using newly developed machine learning algorithms to turn their brain signals into the building blocks of speech.

The electrode implants rest on the surface of the brain and record its electrical activity
Noah Berger

As with the earlier research, the approach focuses on the electrical activity associated with intended control of the jaw, lips and tongue. While patients may experience facial paralysis as a result of their injuries, the brain regions behind these movements often remain intact, and it is exactly this activity that the scientists are trying to take advantage of.
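The article doesn't spell out how that raw activity is turned into usable features, but a common starting point in this kind of work is the amplitude of the high-gamma band of the recorded signal. The Python sketch below is a hypothetical illustration of that step only; the sampling rate, frequency band and channel count are all assumptions for the example.

```python
# Hypothetical feature-extraction step, not taken from the study itself:
# band-pass an ECoG-like recording to the high-gamma range (~70-150 Hz)
# and take its analytic amplitude. Sampling rate, band and channel count
# are all assumptions for the example.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_envelope(ecog, fs=1000.0, band=(70.0, 150.0)):
    """ecog: array of shape (channels, samples) sampled at fs Hz."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, ecog, axis=-1)   # isolate the high-gamma band
    return np.abs(hilbert(filtered, axis=-1))  # instantaneous amplitude

# Toy usage on random data standing in for a 5-second, 128-channel recording
features = high_gamma_envelope(np.random.randn(128, 5000))
print(features.shape)  # (128, 5000)
```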

The questions posed to the participants were simple queries, such as "From 0 to 10, how comfortable are you?" and "How is your room currently?" The subjects then responded out loud with one of 24 predefined answers. In time, the machine learning algorithm was able to correctly determine which answer they were giving, based solely on their brain activity.
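To make the task concrete, here is a hypothetical sketch of that step framed as a generic multi-class classification problem: each trial is a flattened window of neural features labeled with one of the 24 answers. The study built its own purpose-built models, so the classifier choice, feature size and trial counts below are stand-ins.

```python
# Hypothetical sketch of the classification step: mapping a window of
# neural features to one of the 24 predefined answers. The study used its
# own purpose-built models; a generic multi-class classifier stands in
# here, with feature size and trial counts invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

N_ANSWERS = 24                                # fixed response set
rng = np.random.default_rng(0)

# Stand-in training data: each row is a flattened feature window
# (e.g. channels x time of high-gamma amplitude), 20 trials per answer.
X_train = rng.standard_normal((N_ANSWERS * 20, 256))
y_train = np.repeat(np.arange(N_ANSWERS), 20)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# At test time, score a new trial against the full answer set.
trial = rng.standard_normal((1, 256))
print(clf.predict(trial))               # best single guess
print(clf.predict_proba(trial).shape)   # (1, 24) likelihoods over answers
```

Keeping the full likelihood vector, rather than just the top guess, is what makes the context step described next possible.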

"Real-time processing of brain activity has been used to decode simple speech sounds, but this is the first time this approach has been used to identify spoken words and phrases," says UCSF postdoctoral researcher and study co-author David Moses. "It's important to keep in mind that we achieved this using a very limited vocabulary, but in future studies we hope to increase the flexibility as well as the accuracy of what we can translate from brain activity."

Helping to speed things along was the fact that the algorithm listened for both sides of the conversation, rather than just the subject's own speech. By hearing the question being asked, it could use the extra context to quickly narrow down the likely responses.
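One simple way to picture this, though not necessarily the study's exact formulation, is Bayesian: the decoded question supplies a prior over the answer set, which is multiplied by the likelihoods obtained from the neural signal alone. A toy sketch with a four-answer vocabulary:

```python
# Hypothetical Bayesian-style fusion of context and neural evidence, not
# necessarily the study's exact formulation: the decoded question supplies
# a prior over answers, which is multiplied by the likelihoods obtained
# from brain activity alone. All numbers below are toy values.
import numpy as np

def decode_with_context(prior_from_question, likelihood_from_brain):
    """Posterior over answers: prior x likelihood, renormalized."""
    posterior = prior_from_question * likelihood_from_brain
    return posterior / posterior.sum()

# Four-answer toy vocabulary: hearing the question all but rules out
# answers 2 and 3 before the neural evidence is even considered.
prior = np.array([0.45, 0.45, 0.05, 0.05])       # from the decoded question
likelihood = np.array([0.30, 0.20, 0.30, 0.20])  # from brain activity alone
print(decode_with_context(prior, likelihood))    # [0.54 0.36 0.06 0.04]
```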

"Most previous approaches have focused on decoding speech alone, but here we show the value of decoding both sides of a conversation – both the questions someone hears and what they say in response," says team member and speech neuroscientist Eddie Chang. "This reinforces our intuition that speech is not something that occurs in a vacuum and that any attempt to decode what patients with speech impairments are trying to say will be improved by taking into account the full context in which they are trying to communicate."

From Facebook's standpoint, if this kind of technology could be applied non-invasively to the general population, it could allow folks to, for example, send a text message without even taking out their phone. While our brains move quickly and generate a monumental amount of data, the fingers we use to type things out operate much more slowly. The social media giant has said it is working on a system that would allow us to type from our brains around five times faster than our humble fingers are capable of, aspirations that echo those of Elon Musk's Neuralink venture and other researchers in the field.

The UCSF researchers, however, remain focused on the clinical potential of their technology, and will now continue working to improve its efficiency, accuracy and range of applications. Another avenue would be to allow paralyzed people to control prosthetic limbs and computers via brain signals, a possibility also being explored by a number of researchers around the globe.

The team's research was published in the journal Nature Communications.

Sources: University of California San Francisco, Facebook
