Thought-to-text chip smaller than Neuralink achieves 91% accuracy

The new super-small MiBMI chip with a pair of tweezers and alligator clip for comparison

The brain-machine interface race is on. While Elon Musk's Neuralink has garnered most of the headlines in this field, a new small and thin chip out of Switzerland makes it look downright clunky by comparison. It also works impressively well.

The chip was developed by researchers at the École Polytechnique Fédérale de Lausanne (EPFL) and represents a leap forward in the sizzling space of brain-machine interfaces (BMIs) – devices that are able to read activity in the brain and translate it into real-world output such as text on a screen. That's because this particular device – known as a miniaturized brain-machine interface (MiBMI) – is extremely small, consisting of two thin chips measuring just 8 mm² in total. By comparison, Elon Musk's Neuralink device clocks in at a comparatively gargantuan 23 x 8 mm (about 0.9 x 0.3 in) – a footprint of roughly 184 mm², some 23 times larger.

Additionally, the EPFL chipset uses very little power, is reported to be minimally invasive, and is a fully integrated system that processes data in real time. That's different from Neuralink, which requires the insertion of 64 electrodes into the brain and carries out its processing via an app running on an external device.

"MiBMI allows us to convert intricate neural activity into readable text with high accuracy and low power consumption," said Mahsa Shoaran who heads EPFL's Integrated Neurotechnologies Laboratory. "This advancement brings us closer to practical, implantable solutions that can significantly enhance communication abilities for individuals with severe motor impairments."

Like other BMIs, the new chip essentially monitors electrical activity in the brain and, armed with datasets from previous brain-monitoring efforts, converts that activity into an output. In this case, the MiBMI reads the brain signals formed as someone imagines drawing a letter and outputs those signals as text.
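To make that pipeline concrete, here is a minimal Python sketch of a template-matching decode loop of the kind described above. Everything in it is hypothetical – the codebook, the 128-sample feature window, the function names – since the actual MiBMI decoder is implemented in custom silicon and has not been published as code.

```python
import numpy as np

# Hypothetical codebook: one template feature vector per character,
# learned offline from datasets gathered in previous brain-monitoring efforts.
rng = np.random.default_rng(seed=0)
CODEBOOK = {ch: rng.random(128) for ch in "abcdefghijklmnopqrstuvwxyz .,?!"}

def decode_window(features: np.ndarray) -> str:
    """Return the character whose stored template is nearest to this window."""
    return min(CODEBOOK, key=lambda ch: np.linalg.norm(features - CODEBOOK[ch]))

# Stand-in for one window of neural activity recorded while a user
# imagines drawing a letter.
window = rng.random(128)
print(decode_window(window))
```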

Unlike the Neuralink device, which was implanted in a human patient earlier this year, the new MiBMI chip has yet to be tested in a live setting. However, when fed neural recordings from previous brain-interface tests in real time, it achieved a 91% accuracy rate in converting neural activity into text.

Neural shorthand

Part of the success of the new chip lies in a new way of reading the language-processing cues sent by the brain. While working on their chip, the EPFL researchers found a series of very specific neural markers that fire when a patient imagines writing each letter. They termed these markers distinctive neural codes, or DNCs.

The DNCs then became a sort of shorthand for each letter, meaning the MiBMI chipset only has to process the markers themselves. These clock in at about a hundred bytes each, instead of the thousands of bytes of neural data typically associated with imagining each letter. This was a major factor in allowing the chips to do their work in less space using less energy. The researchers say the DNC system will also reduce training time for individuals fitted with the chip.
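As a rough illustration of why that shorthand matters, the sketch below contrasts the two byte figures quoted above. The extract_dnc function is an invented stand-in – the paper's real marker-extraction step is not public – but it shows the shape of the saving: classify a roughly 100-byte code instead of thousands of bytes of raw signal.

```python
import numpy as np

RAW_BYTES_PER_LETTER = 4000  # "thousands of bytes" of raw neural data per letter
DNC_BYTES_PER_LETTER = 100   # "about a hundred bytes" per distinctive neural code

def extract_dnc(raw_window: np.ndarray) -> bytes:
    """Reduce a raw neural window to a compact marker (hypothetical stand-in)."""
    # Keep only the 100 strongest samples as a crude proxy for the real
    # marker-extraction step performed on-chip.
    strongest = np.argsort(np.abs(raw_window))[-DNC_BYTES_PER_LETTER:]
    return raw_window[strongest].astype(np.int8).tobytes()

raw = np.random.default_rng(seed=0).integers(-128, 128, RAW_BYTES_PER_LETTER)
dnc = extract_dnc(raw)
print(f"{RAW_BYTES_PER_LETTER} B of raw signal -> {len(dnc)} B marker "
      f"({RAW_BYTES_PER_LETTER // len(dnc)}x smaller)")
```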

The MiBMI is currently able to decode 31 different characters, which, the researchers say, is a record for comparable integrated systems. They believe they can eventually get the system to decode up to 100 different characters.

As with other BMIs, the EPFL chip is seen as a way to bring the ability to communicate to those who can't, such as people suffering from ALS or other severe motor impairments. The researchers are currently exploring other uses for their system that could go beyond text processing.

"We are collaborating with other research groups to test the system in different contexts, such as speech decoding and movement control," says Shoaran. "Our goal is to develop a versatile BMI that can be tailored to various neurological disorders, providing a broader range of solutions for patients."

The research that led to the new MiBMI's creation has been described in a paper published in the latest issue of the IEEE Journal of Solid-State Circuits.

Source: EPFL

6 comments
frb
seems more like ocr... brain-character-recognition in this case

also not really a fair comparison with the neuralink device which is actually implanted in a human brain and thus includes battery and wireless interface and is read and write....
White Rabbit
@frb Optical Character Recognition requires the optical device, its supporting hardware & software, AND the physical presence of the characters.
Brain CR (as performed by Neuralink) "requires the insertion of 64 electrodes into the brain", the installation "by a high-precision surgical robot" of a device measuring 184 sq. mm, external hardware & software, AND the metaphysical presence of full characters each requiring several thousands of bytes of neural data.
MiBMI requires the "minimally invasive" installation of 2 chips occupying 8 sq. mm, no external processing, AND the metaphysical presence of only the DNCs, each occupying "about a hundred bytes" of neural data.
In these ways I'm not sure that "BCR" adequately represents the distance between the MiBMI and OCR.
While it's true that the MiBMI has not yet been installed in a human, I think it's wise that a comprehensive comparison include reference to the "more than 1,500 animal deaths and excessive, unnecessary cruelty at the startup" of Neuralink testing.

There is one other significant difference. It's claimed that Neuralink has the abilities to Read and Write. I would argue that a machine that can write to the brain, i.e. create thoughts, is something that should rightfully give us nightmares!
ANTIcarrot
@White Rabbit
'Write' as in supplying information to the brain. Information like sight, or hearing, or a sense of touch and motion. Or is helping disabled people really the stuff of nightmares for you?
Treon Verdery
I am very strongly in favor of nonsentient artificial intelligence and computer-brain interfacing. This version is more minute, but onscreen it looks improvable. Make figure-8-shaped holes, or other holes, in it every 0.7 mm, because human tissue-culture brain organoids thrive best at around 1 mm diameter, so 1-3 mm is about as far from capillaries as solid neural tissue still thrives. The actual brain being sensed still has capillaries, but if the interface causes a callus, scar tissue, or pull-back, the cytes it taps to receive/send messaging might remain living and communicating. Also, the implant has 90-degree corners; when they place the item, those might cause gouging or incompatibility with tissue curves. It is likely the creators of this particular brain interface know about it, but gallium metal, cerium (element Ce), and the phosphatidyl chemical moiety reduce immunoreaction/seeping fluid/swelling/phagocytosis. The nonsentient artificial intelligence ChatGPT says a 40-80% improvement occurs in tissue culture with Ga, Ce, and phosphatidyl moieties.
Techutante
I expect we can do this through the scalp in the near future. I've already used (and own two generations of) an Emotiv EPOC device, which is a head-mounted sensor array that can detect your thoughts. This device is over 10 years old at this point, and given the speed at which technology improves, I think off-the-shelf parts can do better now.

With a small amount (about an hour) of practice I could use it to move a mouse cursor around, rotate a 3 dimensional object, and hide or display objects that I focus on.

The idea that we need to breach the skull to get this information is probably outmoded. Restoring vision or other senses to the brain probably could be done similarly with slightly more power output and directed AI control.
White Rabbit
@ANTIcarrot
1) MiBMI development has a similar goal. "Our goal is to develop a versatile BMI that can be tailored to various neurological disorders, providing a broader range of solutions for patients." (Mahsa Shoaran, EPFL) The contrasts with OCR and BCR were intended to show how far the technology had come, and the comparison with Neuralink was an attempt to show that there are many reasonable grounds for such a comparison.
2) Being supplied mediated data about "sight, or hearing, or a sense of touch and motion" sounds a lot like having VR without the gloves, goggles, and other paraphernalia. Note, however, that these peripherals need to communicate with several different areas of the brain - at least according to early sketches from Neuralink. As far as I can tell, MiBMI deals only with the prefrontal cortex and espouses much more modest goals. "This technology holds the potential to significantly improve the quality of life for patients with conditions such as amyotrophic lateral sclerosis (ALS) and spinal cord injuries."
3) If having thoughts "written to" the brain doesn't make you even a little uncomfortable, imagine how you would feel if someone with a worldview very different from yours decided what thoughts to implant, or how the perceptual data should be mediated. Would you not prefer to avoid having the equivalent of "rose-colored glasses" implanted in your brain?