Understanding thought: new computational modeling sheds light on how the brain works

Marcel Just and Tom M. Mitchell (Photo: Carnegie Mellon)

June 10, 2008 It's easy to forget that while humanity has made fantastic advances in our understanding of the world around us, our own brains, and how they organize information, remain largely a mystery. Scientists at Carnegie Mellon University have developed a computational model that can predict the unique brain activation patterns associated with concrete nouns – an important step in charting one of science’s last unexplored frontiers.

Understanding how and where the brain stores information is important to neuroscientists who wish to better diagnose and study brain damage and mental conditions. Using functional magnetic resonance imaging (fMRI), researchers can map the areas of the brain that become active when a subject thinks about the qualities of a specific word. But while previous studies have mostly been limited to recording brain states as they appear, Carnegie Mellon's groundbreaking computational model can extrapolate what undocumented brain states will look like from existing data.

The computational model is based on the notion that the sensory-motor concepts we associate with a word determine how it is stored in the brain. For example, the nouns "apple" and "pie" will both activate the pattern associated with the root concept "eat". This depiction of the brain's organizational structure lends itself far more easily to computational modeling because, while there may be millions of individual nouns in the English language, scientists have identified just 25 basic verbs that form the "building blocks" of our understanding. These are: see, hear, listen, taste, smell, eat, touch, rub, lift, manipulate, run, push, fill, move, ride, say, fear, open, approach, near, enter, drive, wear, break, and clean. By statistically analyzing a body of text comprising over one trillion words, the computer was able to determine how words in the English language relate to one another, and the degree to which various nouns are associated with those 25 core sensory-motor concepts.
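To make that step concrete, here is a minimal Python sketch of how a noun could be reduced to a 25-dimensional feature vector of verb co-occurrence statistics. The verb list comes from the study itself; the `cooccurrence_counts` input and the simple sum-to-one normalization are illustrative assumptions, not the CMU team's actual corpus pipeline.

```python
import numpy as np

# The 25 sensory-motor verbs the CMU team used as semantic building blocks.
BASIS_VERBS = [
    "see", "hear", "listen", "taste", "smell", "eat", "touch", "rub",
    "lift", "manipulate", "run", "push", "fill", "move", "ride", "say",
    "fear", "open", "approach", "near", "enter", "drive", "wear",
    "break", "clean",
]

def semantic_features(noun, cooccurrence_counts):
    """Return a 25-dimensional feature vector for `noun`.

    `cooccurrence_counts` is a hypothetical dict mapping (noun, verb)
    pairs to how often the pair appears together in the text corpus.
    The sum-to-one normalization is an illustrative choice.
    """
    counts = np.array(
        [cooccurrence_counts.get((noun, verb), 0) for verb in BASIS_VERBS],
        dtype=float,
    )
    total = counts.sum()
    return counts / total if total > 0 else counts

# Toy example: a noun that co-occurs mostly with "eat" and "taste".
toy_counts = {("apple", "eat"): 900, ("apple", "taste"): 300, ("apple", "see"): 150}
print(semantic_features("apple", toy_counts))
```

In the real system these statistics were gathered from a trillion-word corpus, so even fairly uncommon nouns accumulate enough co-occurrences to produce a meaningful profile.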

The study then presented nine healthy, college-age participants with 60 different word-picture pairs drawn from 12 semantic categories, six times each. A representative fMRI image for each stimulus was created from each participant's average brain response to the six presentations. By referencing the corpus statistics, the computer could analyze how a noun's relationship to the core verbs shaped its fMRI image. This gave researchers the scaffolding needed to build a model that predicts what the brain activation pattern of an untested concrete noun would look like: first determine which core verbs the noun is associated with in the text corpus, then combine the areas of the brain that those concepts activate.
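In the published model, each voxel's activation is predicted as a weighted sum of the 25 semantic features. The sketch below fits those weights with ordinary least squares; this is a simplified stand-in for the paper's training procedure, with array shapes chosen for illustration.

```python
import numpy as np

def train_voxel_models(features, images):
    """Fit one linear model per voxel by ordinary least squares.

    features : (n_words, 25) array of semantic feature vectors
    images   : (n_words, n_voxels) array of observed activations
    Returns a (25, n_voxels) weight matrix whose entry [i, v] says
    how strongly basis verb i drives voxel v.
    """
    weights, *_ = np.linalg.lstsq(features, images, rcond=None)
    return weights

def predict_image(feature_vector, weights):
    """Predict a whole-brain activation pattern for an unseen noun."""
    return feature_vector @ weights
```

The appeal of the linear form is interpretability: each column of the weight matrix is literally a map of which brain regions a given basis verb recruits.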

To test the model, the researchers repeatedly trained it on 58 of the 60 nouns and had it predict the activation patterns of the two words that were held out. The predicted patterns matched the ones observed in the participants' brains with a mean accuracy of 77 percent, where chance performance would be 50 percent. The findings were published in the May 30 issue of the journal Science.
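The matching step in that leave-two-out test can be sketched as follows: score both ways of pairing the two predicted images with the two observed ones, and count the trial as correct when the true pairing wins. Cosine similarity stands in here as the resemblance measure; the actual study computed it over a subset of the most stable voxels, which this sketch omits.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two activation images (flattened)."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def pairing_is_correct(pred1, pred2, obs1, obs2):
    """Leave-two-out test: does matching each predicted image to its own
    observed image score higher than the swapped pairing?"""
    true_score = cosine(pred1, obs1) + cosine(pred2, obs2)
    swap_score = cosine(pred1, obs2) + cosine(pred2, obs1)
    return true_score > swap_score
```

Averaging this yes/no outcome over every possible held-out pair of the 60 nouns yields the kind of accuracy figure reported above.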

The experiment revealed that concrete nouns, or words associated with specific sensations or actions, activate neurons in the sensory-motor areas of the brain relevant to that concept. For example, the noun "apple", which is associated with the verb "eat", stimulates activity in the gustatory cortex, the sensory cortex for taste. Concrete nouns associated with "push" activate the right postcentral gyrus, which is linked to premotor planning, and nouns associated with "run" activate the right superior temporal sulcus, which is linked to the perception of biological motion. The words also activate areas of the brain involved in memory and planning: the word "apple" might light up neurons associated with the memory of eating an apple, the sensation of eating one, and the behavior required to obtain one.

The system has applications beyond helping scientists understand mental conditions. Brain-reading technology like this could one day power prosthetic communication devices, assisting patients who are unable to speak, facilitating remote correspondence, and perhaps even performing translation duties. It is also being closely examined by the gaming industry, one example being Emotiv's EPOC.

However, don't look for advanced brain-reading devices on the market any time soon. The researchers at Carnegie Mellon stress that their system is just a foundation. And unfortunately, advancement in the field isn't simply a matter of building more sophisticated technology: brain readers are also held back, to a degree, by the human brain itself. Tom Mitchell of the university's Machine Learning Department explains: "It's not a controllable experiment. It can be hard to focus. Somewhere in the middle of that their stomach growls. And all of a sudden they think, 'I'm hungry - oops.'"

Via Carnegie Mellon.
