How does our brain organize the visual information that our eyes capture? Researchers at the University of California, Berkeley, used computational models of brain imaging data to answer this question and arrived at what they call “continuous semantic space,” a concept that serves as the basis for the first interactive maps showing how the brain categorizes what we see.
The data on which the maps are based was collected while the subjects watched movie clips. Brain activity was recorded via functional Magnetic Resonance Imaging (fMRI), a type of MRI that measures brain activity by detecting associated changes in blood flow. To find correlations in the collected data, the researchers used a type of analysis known as regularized linear regression. Based on the results of that analysis, they built a model showing how each of the roughly 30,000 locations in the cortex responded to each of the 1,700 categories of objects and actions seen in the movie clips. Finally, another analytical method, principal components analysis, was used to find the “semantic space” common to all the study subjects.
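For readers curious about what such a pipeline looks like in practice, the sketch below illustrates the general idea using Python and scikit-learn. It is not the researchers' actual code: the array sizes, the random toy data, and the choice of ridge regression as the regularized linear model are assumptions made purely for illustration.

```python
# Illustrative sketch only (not the study's code): a regularized linear
# regression relates category features in the movie clips to each cortical
# location's fMRI response, and PCA over the fitted weights recovers a
# shared low-dimensional "semantic space".
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

n_timepoints = 500    # fMRI volumes recorded while a subject watches clips
n_categories = 1700   # object/action categories labeled in the clips
n_voxels = 2000       # toy subset; the study modeled roughly 30,000 locations

# Toy stand-ins for the real data: which categories appear at each timepoint,
# and the measured response of every cortical location at that timepoint.
X = (rng.random((n_timepoints, n_categories)) < 0.01).astype(float)
Y = rng.standard_normal((n_timepoints, n_voxels))

# Regularized linear regression: one weight per (category, location) pair,
# describing how strongly each category drives each cortical location.
model = Ridge(alpha=10.0)
model.fit(X, Y)
weights = model.coef_   # shape (n_voxels, n_categories)

# Principal components analysis across locations finds the few dimensions
# that capture most of the variation in category tuning, giving each
# location coordinates in a "semantic space" shared across subjects.
pca = PCA(n_components=4)
semantic_coords = pca.fit_transform(weights)   # each location's position
print(semantic_coords.shape)                   # (2000, 4)
```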
The maps show more than 1,700 visual categories and their relationships to one another, with color coding used to indicate categories that activate the same brain areas. Some of the relationships between categories made sense (for example, humans and animals), while others were less obvious (such as hallways and buckets). The researchers also found that different people shared a similar semantic layout.
Traditionally, scientists have assumed that the categories of objects and actions humans see (people, animals, movements, etc.) are each represented in a separate region of the visual cortex, the part of the cerebral cortex responsible for processing visual information. This study, however, found that the process is much broader, with categories represented in highly organized, overlapping maps that cover up to 20 percent of the brain, including the somatosensory cortex (which processes sensory modalities such as touch and pain) and the frontal cortex (which governs a wide range of behaviors and functions).
The research could potentially aid medical diagnosis and the treatment of brain disorders. It could also be applied to brain-machine interfaces, particularly facial and other image recognition systems of the kind increasingly used at border control, and could make self-checkout systems more efficient at recognizing goods.
“Our methods open a door that will quickly lead to a more complete and detailed understanding of how the brain is organized,” said Alexander Huth, a doctoral student in neuroscience at UC Berkeley and lead author of the study.
The team has produced an online brain viewer to display the findings.
The study was published in the December 19 edition of the journal Neuron.
Alexander Huth explains the method and results of his team's experiment in the video below.
Source: Berkeley