Brain/gesture-reading system lets users stop and correct errant robots

Last year, we heard about an MIT-designed system that detects when someone has observed a robot making a mistake, and that stops the robot as a result. A new addition now allows that person to let the robot know what it should be doing, using hand gestures.

In the original setup, an electrode-laden electroencephalography (EEG) cap was used to monitor the electrical activity of a person's brain, as they watched a Baxter humanoid robot performing a simple task.

If they saw that the robot had made a mistake, their brain would produce a signal known as an error-related potential (ErrP) – our brains automatically generate ErrPs whenever we see a mistake being made. The EEG cap would detect the signal, and the robot would thus be stopped mid-task.
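To make that mechanism concrete, here is a minimal Python sketch of an ErrP-triggered stop loop. Everything in it is an assumption for illustration – the article doesn't describe the team's actual classifier or robot interface, so errp_score and robot.stop() are hypothetical stand-ins for a trained ErrP detector and the Baxter's control layer.

import numpy as np

ERRP_THRESHOLD = 0.8   # assumed confidence threshold for calling an ErrP

def errp_score(window):
    """Stand-in for a trained ErrP classifier: returns a confidence in
    [0, 1] that this EEG window contains an error-related potential."""
    # A real system would run a trained classifier here; this placeholder
    # just measures average deviation from the window's mean amplitude.
    return float(np.clip(np.abs(window - window.mean()).mean(), 0.0, 1.0))

def monitor(eeg_stream, robot):
    """Stop the robot as soon as an ErrP shows up in the EEG stream."""
    for window in eeg_stream:              # each item: (channels, samples) array
        if errp_score(window) > ERRP_THRESHOLD:
            robot.stop()                   # hypothetical robot interface
            return True                    # hand control over to gesture input
    return False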

One of the neat things about the setup was that unlike other systems that allow people to control robots with their brain, this one didn't require them to consciously "think" in a specific way. Instead, it simply reacted to what their brain did reflexively. Additionally, it worked on anyone, requiring no special setup for individual users.

Now, a team led by Prof. Daniela Rus has added an electromyography (EMG) sensor to the mix. Applied to the observer's right forearm, it uses electrodes to detect the electrical activity of the muscles within that arm. Different patterns of activity indicate that different gestures are being made.
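As a rough illustration of how a left-or-right flick might be read out of those muscle signals, the sketch below compares rectified amplitude across two forearm EMG channels. The channel assignments and threshold are assumptions made for the example, not details from the MIT work.

import numpy as np

ONSET_THRESHOLD = 0.3  # assumed activation level that counts as a deliberate flick

def classify_flick(window):
    """Return 'left', 'right', or None for one two-channel EMG window.

    Assumes channel 0 sits over muscles that dominate leftward flicks
    and channel 1 over muscles that dominate rightward flicks."""
    envelope = np.abs(window).mean(axis=1)   # crude per-channel amplitude
    if envelope.max() < ONSET_THRESHOLD:
        return None                          # arm at rest, no gesture
    return "left" if envelope[0] > envelope[1] else "right"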

In lab tests, seven volunteers took turns being hooked up to both the EEG and EMG, then watching as a Baxter performed a task in which it power-drilled one of three target locations on a mock airplane fuselage. The target it was supposed to drill was indicated by one of three corresponding LEDs illuminating briefly, although the robot ignored that, choosing its target randomly each time.

Whenever the Baxter went for the wrong target, the observer produced an ErrP, automatically stopping it. They then flicked their hand to either the left or right, indicating which direction the robot should move the drill – the number of flicks indicated the number of spaces over it should move.
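The correction rule itself is simple enough to write down directly: each flick shifts the intended target one space in the flick's direction. The sketch below just encodes that rule – the three-target count comes from the fuselage test, while the function name and clamping behavior are illustrative choices.

def corrected_target(current, flicks, n_targets=3):
    """Apply a sequence of 'left'/'right' flicks to a target index."""
    step = sum(1 if f == "right" else -1 for f in flicks)
    return max(0, min(n_targets - 1, current + step))  # clamp to valid targets

# Example: the robot went for target 2 and the observer flicks left twice,
# so the corrected target is index 0.
assert corrected_target(2, ["left", "left"]) == 0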

With this setup, the volunteers were able to consistently stop and correct the robot. Various other gestures could be programmed into the system down the road, letting users give robots additional commands.

"We'd like to move away from a world where people have to adapt to the constraints of machines," says Rus. "Approaches like this show that it's very much possible to develop robotic systems that are a more natural and intuitive extension of us."

The system can be seen in use in the following video.

Source: MIT Computer Science and Artificial Intelligence Lab via EurekAlert
