A new study conducted by Brown University researchers has furthered our understanding of how the brain formulates a plan for picking up an object. In the long run, the findings could pave the way for more accomplished mind-controlled robotic prostheses.
When designing a thought-controlled prosthesis, a strong understanding of the neural patterns that signal commands is key, but our understanding of the process is limited. While previous research has worked to identify the neural activity associated with different grip types and hand positions, those studies have typically been too narrow, examining individual objects and the grip types associated with them.
The researchers aimed to broaden that picture with the new study, examining how the brain formulates plans to grip the same object in different ways, or different objects in the same way.
To do so, they recorded and analyzed the neural activity in the ventral premotor cortex of three trained rhesus macaques. The monkeys were handed different objects and instructed via colored lights how to grip them, with the patterns of neural activity recorded at every stage of the process.
The findings show that the brain has multiple ways of formulating grip commands, and they're generally influenced by the object that's being gripped. The analysis technique was designed to detect patterns of activity without relying on pre-existing assumptions about neural activity.
Identifiable patterns were detected as soon as the object was shown to the animal, and by the time the grip occurred, four distinct object-grip combinations could be distinguished with an accuracy of 95 percent.
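The study itself used a purpose-built analysis, but the general idea of decoding object-grip combinations from neural activity can be illustrated with a simple classifier. The sketch below is purely hypothetical and is not the researchers' actual method: it simulates firing-rate vectors for four made-up object-grip classes and assigns each test trial to the class whose average pattern it most resembles (a nearest-centroid classifier). All class counts, neuron counts, and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the study): 4 object-grip combinations,
# each trial represented as a firing-rate vector across 50 neurons.
n_neurons = 50
n_train, n_test = 40, 20  # simulated trials per combination

# Each combination gets its own underlying mean activity pattern.
centroids_true = rng.normal(0.0, 1.0, size=(4, n_neurons))

def simulate(n_per_class):
    """Generate noisy trials around each class's mean pattern."""
    X, y = [], []
    for label, mu in enumerate(centroids_true):
        X.append(mu + 0.5 * rng.normal(size=(n_per_class, n_neurons)))
        y.extend([label] * n_per_class)
    return np.vstack(X), np.array(y)

X_train, y_train = simulate(n_train)
X_test, y_test = simulate(n_test)

# "Fit": average the training trials belonging to each class.
centroids = np.vstack([X_train[y_train == k].mean(axis=0) for k in range(4)])

# "Predict": assign each test trial to the nearest class centroid.
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y_test).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

With well-separated synthetic patterns like these, the toy decoder classifies trials near-perfectly; real neural data is far noisier, which is why a reported 95 percent accuracy across four conditions is notable.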
The study suggests that the formulation of a grip plan occurs earlier in the cognitive process of picking up an object than was previously thought. This knowledge could lead to brain-computer interfaces that are able to instruct prostheses more quickly and accurately, improving their overall effectiveness.
"You can have the same movement resulting from very different activity patterns within the context of different objects," says study author Carlos Vargas-Irwin. "If we are trying to build a [brain-computer interface] decoder we need to take into account the bigger context of what the target of the movement is."
The Brown University team intends to continue its research, working to establish how general the findings are by seeing if they apply to a wider selection of objects.
The study results were published in the Journal of Neuroscience.
Source: Brown University