Science

Scientists reconstruct visual stimuli by reading brain activity

Scientists have created a system that is able to visually reconstruct images that people have seen, by reading their brain activity

In the 1983 film Brainstorm, Christopher Walken played a scientist who was able to record movies of people's mental experiences, then play them back into the minds of other people. Pretty far-fetched, right? Well, maybe not. Using functional Magnetic Resonance Imaging (fMRI) and computer models, researchers at the University of California, Berkeley, have been able to visually reconstruct what human subjects saw while watching movie trailers, based on their brain activity - in other words, they could see what the people's brains were seeing.

The study involved placing three subjects in an MRI scanner and having them watch two sets of Hollywood movie trailers. The fMRI was used to measure blood flow through their brains' visual cortex as they watched. A computer used this data to virtually divide their brains into small three-dimensional cubes called voxels. Computer models of each voxel were then created, incorporating information about how that real-life section of the brain responded to different types of visual stimuli. In this way, the computer was able to match up specific voxel activity with specific visual patterns from the trailers - it acted as a Rosetta Stone, of sorts.
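
The per-voxel modeling step can be sketched in a few lines. This is a simplified illustration, not the researchers' actual method: it assumes a plain linear least-squares fit from visual features to each voxel's response, and uses random numbers in place of real movie features and fMRI data.

```python
import numpy as np

# Hypothetical sketch of a per-voxel encoding model: each voxel's
# response is modeled as a linear function of visual features
# extracted from the movie frames. Random data stands in for the
# real stimulus features and measured fMRI responses.
rng = np.random.default_rng(0)

n_frames, n_features, n_voxels = 500, 20, 100
features = rng.normal(size=(n_frames, n_features))      # stimulus features per frame
true_weights = rng.normal(size=(n_features, n_voxels))  # unknown "brain" weights
responses = features @ true_weights + 0.1 * rng.normal(size=(n_frames, n_voxels))

# Fit one set of weights per voxel with least squares.
weights, *_ = np.linalg.lstsq(features, responses, rcond=None)

# The fitted model can now predict voxel activity for any new stimulus.
predicted = features @ weights
```

Once a model like this is fit for every voxel, it can be run "forward" on footage the subject has never seen, which is what makes the matching step described below possible.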

The resulting movie reconstruction algorithm was then fed 18 million seconds of random YouTube video, for which it predicted the corresponding voxel activity. For each frame of the trailers, it then chose the 100 YouTube clips whose predicted voxel activity most closely resembled the activity measured for that frame. These 100 images were combined into one blurry composite image that resembled the trailer frame. When strung together, those composite images presented a somewhat trippy yet recognizable facsimile of the complete trailer.
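
The rank-and-average step above can be sketched as follows. Again this is only an illustrative toy, assuming correlation as the similarity measure and tiny random arrays in place of the 18 million seconds of YouTube footage and real voxel measurements.

```python
import numpy as np

# Hypothetical sketch of the reconstruction step: given observed voxel
# activity for one trailer frame, rank a library of candidate clips by
# how well their *predicted* voxel activity matches, then average the
# top 100 candidates into a blurry composite image.
rng = np.random.default_rng(1)

n_clips, n_voxels, img_size = 1000, 50, (8, 8)
library_activity = rng.normal(size=(n_clips, n_voxels))  # predicted activity per clip
library_images = rng.uniform(size=(n_clips, *img_size))  # the clips' frames

# Pretend the subject watched clip 42; measured activity is noisy.
observed = library_activity[42] + 0.05 * rng.normal(size=n_voxels)

def correlate(a, b):
    """Pearson correlation between two activity vectors."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

scores = np.array([correlate(row, observed) for row in library_activity])
top = np.argsort(scores)[::-1][:100]          # 100 best-matching clips
composite = library_images[top].mean(axis=0)  # blurry average reconstruction
```

Averaging 100 loosely matched frames is what produces the characteristic blur of the published reconstructions: shared structure (a face, a horizon) survives the average, while clip-specific detail washes out.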

So far, the system can only reconstruct movie trailers that subjects have already viewed. As the UC Berkeley technology is developed, however, it is hoped that it could be used to visualize what is happening in the minds of stroke victims, coma patients, and other people unable to communicate adequately. It could also be used to improve human-computer interfaces, such as those that allow handicapped individuals to control devices using their thoughts.

The video below shows parts of the original trailers, with the reconstructions playing alongside. Below it is a video that displays images from the trailers, with some of the YouTube images that were used to create their composite equivalents.

The research was published yesterday in the journal Current Biology.

Movie reconstruction from human brain activity

Movie reconstructions from human brain activity: 3 subjects

8 comments
Salim Khalaf
Many years ago, I knew this would happen. One day in the future, analysis of "memory" chemicals in the brain of a murder victim will show who the murderer is.
Joel Detrow
Fascinating and scary at the same time.
Jamie_S
This has already been demonstrated using cats as specimens in a film on youtube - they wired electrodes into a cat's brain and showed on a monitor what the cat was thinking/seeing. So, not exactly a new discovery, more likely the first time they've gone public with it. When I've found the link I'll post it.
Jamie_S
Further to my last comment, here you go...
http://www.youtube.com/watch?feature=player_embedded&v=piyY-UtyDZw
Tysto
\"The resulting movie reconstruction algorithm was then fed 18 million seconds of random YouTube videos\"
Okay. That\'s not very practical for broader use.
\"These 100 images were combined into one blurry composite image\"
Aaaaaand that\'s never going to yield decent results.
So, this is cool and interesting, but it\'s sort of designed to be terrible. When I first saw the result, I said, \"That doesn\'t look like Steve Martin so much as it looks like a random guy in a black T-shirt making a YouTube video.\" Since it\'s clear from the sample set that that\'s EXACTLY what it is, it suggests this has great potential if the output can be decoupled from YouTube.
Knowledge Thirsty
@Jamie_S: Great post. Not sure how real it is, but I think the big differentiator with the results of this article is the ability to retrieve the image from memory rather than view in real time.
Nitrozzy7
Pirates!
kenstru
Reminds me of the visual cortex recording technology depicted in Wim Wenders' (then) futuristic epic, "Until The End of the World" - Max Von Sydow playing the brilliant & obsessed scientist whose masterpiece gizmo allows him to transmit imagery to his blind wife (and is later modded to record one's dreams).