US Navy introduces 'Octavia' robot to public

[Image gallery (3 images): The US Navy's Octavia robot]

If members of the armed forces are going to be regularly interacting with robots, and it seems likely that they will be, then they had better be comfortable around those robots. The last thing a soldier, pilot or sailor needs is to be staring at some creepy-looking humanoid machine and saying, “Um, listen, I want you to... ah, screw it, I’ll do it myself.” That’s the thinking behind an initiative from the US Navy Center for Applied Research in Artificial Intelligence (NCARAI), which has been working on natural human-robot interaction. The idea is that if sailors can communicate with a robot through human-to-human style speech and gestures, they will be able to concentrate more on the task at hand and less on the interface. NCARAI’s latest attempt at an easy-to-relate-to robot, named Octavia, was recently presented to the public for the first time in New York City.

Octavia rolls around on a wheeled, Segway-like base, but on top she has a fully articulated head, arms and hands. Her face even has a moving mouth, eyeballs, eyelids and eyebrows. While such features might seem frivolous, they are essential to making sure that sailors know she is “getting” what they’re saying, and that they’re properly interpreting her reactions. She was in NYC from May 26th to June 2nd for Fleet Week, an annual exhibition of naval displays and demonstrations.

Officially classified as an MDS, or Mobile/Dexterous/Social robot, Octavia is, not surprisingly, packed with technology. She utilizes both laser and infrared rangefinders, has a color CCD camera in each eye, and carries four microphones in her head.

Thanks to her Fast Linear Optical Appearance Tracker (FLOAT) system, Octavia can track the position and orientation of complex objects (such as faces) in real time, from up to ten feet (3 m) away. A Head Gesture Recognizer watches for head nods and shakes, while the Set Theoretic Anatomical Tracker (STAT) follows head, arm and hand positions. All of this data is sent to a VLAD Recognizer, which tries to figure out what a person is doing or indicating with their gestures. The robot learns through ACT-R/E, NCARAI’s modified version of the ACT-R cognitive architecture, which is designed to replicate the way in which people learn.
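The Navy hasn’t published the internals of these subsystems, but the data flow described above, per-frame tracker outputs fused into a single gesture-and-intent decision, can be sketched in a few lines. The following Python is a purely illustrative toy, not the actual FLOAT/STAT/VLAD code; every class, threshold and mapping here is an assumption.

```python
# Toy sketch of the pipeline described above. All names and thresholds are
# hypothetical; the real Navy/MIT implementations are not public.
from dataclasses import dataclass


@dataclass
class FacePose:
    """Roughly what a FLOAT-style face tracker might emit each frame."""
    x: float
    y: float
    z: float        # position in meters
    yaw: float      # orientation in radians
    pitch: float


@dataclass
class BodyPose:
    """Roughly what a STAT-style anatomical tracker might emit (x, y, z)."""
    head: tuple
    left_hand: tuple
    right_hand: tuple


def head_gesture(history: list[FacePose]) -> str:
    """Toy nod/shake detector: compare recent pitch and yaw swings."""
    if len(history) < 2:
        return "none"
    d_pitch = max(p.pitch for p in history) - min(p.pitch for p in history)
    d_yaw = max(p.yaw for p in history) - min(p.yaw for p in history)
    if d_pitch > 0.2 and d_pitch > d_yaw:
        return "nod"    # mostly up-and-down motion
    if d_yaw > 0.2:
        return "shake"  # mostly side-to-side motion
    return "none"


def recognize(body: BodyPose, gesture: str) -> str:
    """Stand-in for a VLAD-style recognizer: fuse cues into an intent label."""
    if gesture == "nod":
        return "affirm"
    if gesture == "shake":
        return "negate"
    # A hand raised above head height might signal a request for attention.
    if body.right_hand[1] > body.head[1]:
        return "attention"
    return "unknown"


if __name__ == "__main__":
    # A short burst of face-tracker frames showing a nodding motion.
    frames = [FacePose(0, 0, 1.5, 0.0, p) for p in (0.0, 0.15, -0.1, 0.12)]
    body = BodyPose(head=(0, 1.7, 1.5),
                    left_hand=(-0.3, 1.0, 1.4),
                    right_hand=(0.3, 1.2, 1.4))
    g = head_gesture(frames)
    print(g, "->", recognize(body, g))  # nod -> affirm
```

The point of the sketch is simply the architecture: independent trackers each reduce raw sensor data to a compact pose or gesture signal, and a downstream recognizer combines those signals into a single interpretation that the robot’s cognitive layer can act on.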

Octavia was designed by MIT’s Personal Robotics Group and Xitome Design... who will apparently sell you your own Octavia, if you’re interested.

Via: Fox News.

[Video: Official MDS Robot Video - First Test of Expressive Ability]

1 comment
Justin Ryan
The whole time I was looking at this I was thinking about Casper, the Friendly Ghost!