Military

US Army reads soldier's brain waves to speed up image analysis

[Image: Researchers at the US Army's MIND Lab are using EEGs to help speed up image analysis (Credit: US Army)]
[Image: Anthony Ries instructs Pfc. Kenneth Blandon on how to play a computer game that uses eye-tracking technology (Credit: US Army)]

Technology, from satellites to drones, has dramatically increased the amount of imagery being gathered by military intelligence, posing a daunting task for the analysts who must look at and evaluate it. Researchers at the US Army's Mission Impact through Neurotechnology Design (MIND) Lab at the Aberdeen Proving Ground, Maryland, are looking to speed things up by leveraging the power of the human brain.

Despite all sorts of advances in computer search algorithms, the most reliable image analysis tool is still the Eyeball Mark I backed up by the human brain. Currently, analysts start with large images that they visually scan from the top left corner, moving left to right, then down, while searching for specific items in a painstaking, time-consuming manner. The result is a huge lag between collecting the data and getting results to soldiers in the field while the information is still useful.

Researchers at the MIND Lab are working on using brainwaves to remove this bottleneck. The human brain is much faster at image processing than any computer, and if this ability can be tapped into directly, it has the potential to vastly speed up what is now a very tedious job.

"What we are doing is basically leveraging the neural responses of the visual system," says cognitive neuroscientist Anthony Ries. "Our brain is a much faster image processor than any computer is. And it's better at detecting subtle differences in an image."

In the MIND Lab experiment, a soldier volunteer hooked up to an electroencephalograph (EEG) selected one of five categories – boats, pandas, strawberries, butterflies, or chandeliers. The soldier kept his selection to himself and was then placed in front of a computer monitor that displayed a series of images, each falling into one of these five categories, at a rate of roughly one per second. During this process, the soldier was asked to keep count of how many of the images fell into his chosen category, and by analyzing his brain waves the computer was able to determine which category he was focusing on.
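As a rough illustration of how such a detector might work (a sketch, not the Army's published method), the EEG can be cut into epochs time-locked to each image onset and scored by a classifier trained to recognize the brain's target response; the category whose images draw the strongest average response is the one the viewer was counting. Everything below, from the function names to the choice of a linear discriminant classifier, is an assumption for illustration:

```python
# Hypothetical sketch of the category-detection step: EEG epochs
# time-locked to each image onset are scored by a trained classifier,
# and the category whose images draw the strongest "target" responses
# wins. Shapes and names are assumptions, not the Army's pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

CATEGORIES = ["boats", "pandas", "strawberries", "butterflies", "chandeliers"]

def train_target_detector(epochs, is_target):
    """epochs: (n_trials, n_channels, n_samples) EEG around each image onset;
    is_target: boolean labels from a calibration run with known targets."""
    X = epochs.reshape(len(epochs), -1)   # flatten channels x time
    clf = LinearDiscriminantAnalysis()
    clf.fit(X, is_target)
    return clf

def infer_attended_category(clf, epochs, image_categories):
    """Score every image's epoch, then average the scores per category;
    the attended category should show the largest mean target response."""
    scores = clf.decision_function(epochs.reshape(len(epochs), -1))
    means = {c: scores[np.array(image_categories) == c].mean()
             for c in CATEGORIES}
    return max(means, key=means.get)
```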

To make this technique applicable for image analysis, Ries and his team split up larger images into bite-sized chunks called "chips." These are flashed on a screen at a rate of up to five images per second while an EEG monitors the subject's brainwaves and notes when an image produces a spike of interest.
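A minimal sketch of the chipping step might look like the following, with the chip size and the decision to drop edge remainders as assumptions made purely for illustration:

```python
# Hypothetical sketch of the "chipping" step: a large image is cut into
# fixed-size tiles that can be flashed one at a time (here, up to five
# per second) while EEG is recorded. Chip size is an assumption, and
# edge remainders are dropped for simplicity.
import numpy as np

def make_chips(image, chip_size=256):
    """Split a (H, W, ...) image array into a list of
    ((row, col), chip) pairs keyed by top-left coordinates."""
    chips = []
    h, w = image.shape[:2]
    for y in range(0, h - chip_size + 1, chip_size):
        for x in range(0, w - chip_size + 1, chip_size):
            chips.append(((y, x), image[y:y + chip_size, x:x + chip_size]))
    return chips

PRESENTATION_RATE_HZ = 5               # up to five chips per second
CHIP_DURATION_S = 1.0 / PRESENTATION_RATE_HZ
```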

"Whenever the soldier or analyst detects something they deem important, it triggers this recognition response," says Reis. "Only those chips that contain a feature that is relevant to the soldier at the time – a vehicle, or something out of the ordinary, somebody digging by the side of the road, those sorts of things – trigger this response of recognizing something important."

Ries stresses that the analyst still has to go over the entire image, but by viewing it as a rapid series of smaller pieces, the process is much faster. Part of the reason is that the computer can mark up the image automatically. Instead of annotating the image by hand, all the analyst has to do is think "of interest" or "not of interest" and the machine does the rest.
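Conceptually, the flagged chips can then be projected back onto the full image as a mask, as in this hypothetical sketch (the coordinates and chip size are assumed to match the chipping step above):

```python
# Hypothetical sketch of the automatic highlighting step: chips whose
# EEG response crossed the detector's "of interest" threshold are
# marked back onto a mask over the original image.
import numpy as np

def highlight_flagged(image_shape, flagged_coords, chip_size=256):
    """Return a boolean mask over the full image, True wherever a chip
    triggered a recognition response."""
    mask = np.zeros(image_shape[:2], dtype=bool)
    for (y, x) in flagged_coords:
        mask[y:y + chip_size, x:x + chip_size] = True
    return mask
```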

One problem the team encountered was how to deal with "noise." When the soldier in the test clenched his jaw, this produced electrical noise that the EEG picked up, making it more difficult for the computer to analyze the subject's brainwaves. It also raised the problem of how to deal with things that are expected to occur in the workplace, such as someone speaking to the analyst while they're hooked to the computer. Since this is almost certainly going to happen, such as when receiving instructions, the algorithm needs to be able to take that into account.
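One common way to handle such muscle artifacts, offered here only as an illustrative assumption about what a system like this might do, is to reject any epoch whose peak-to-peak amplitude exceeds a threshold before classification; routine events like speech would need subtler handling than a simple cutoff:

```python
# Hypothetical sketch of simple artifact rejection: epochs whose
# peak-to-peak amplitude exceeds a threshold (e.g. from a jaw clench)
# are discarded before classification. The threshold is an assumption.
import numpy as np

def reject_artifacts(epochs, max_peak_to_peak_uv=150.0):
    """epochs: (n_trials, n_channels, n_samples) in microvolts.
    Returns the surviving epochs and a boolean mask of which were kept."""
    ptp = epochs.max(axis=2) - epochs.min(axis=2)   # per trial, per channel
    keep = (ptp < max_peak_to_peak_uv).all(axis=1)  # drop if any channel blows up
    return epochs[keep], keep
```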

The team is also looking to integrate eye-tracking technology into the system.

"One thing we have done is instead of having people view images at the center of the screen, we're leveraging eye-tracking to know whenever they fixate on a particular region of space," says Reis. "We can extract the neural signal, time-locked to that fixation, and look for a similar target response signal. Then you don't have to constrain the image to the center of the screen. Instead, you can present an image and the analyst can manually scan through it and whenever they fixate on an item of interest, that particular region can be flagged."

Ries says the ultimate goal of the research is to develop a system that lets image analysts sort through large volumes of image data more rapidly, without sacrificing accuracy, by relying on the human brain's natural image-processing and pattern-recognition abilities.

Source: US Army

1 comment
Daishi
The US Army is not the right institution to get the most out of this type of research. Work on things like the ImageNet Large Scale Visual Recognition Challenge is likely miles ahead of what the Army could develop on their own.
At best, the Army's method might serve as an interrogation technique. The Air Force is also already doing a ton of work around analyzing things like satellite imagery. Where this gets really frightening is with the ARGUS-IS drone-based surveillance platform. It will transform intelligence gathering.