If you ever wished you had an angel at your shoulder to give tips on how to carry out a difficult job, a digital version may not be that far off. A team of scientists at Carnegie Mellon University is working on a wearable cognitive assistance computer system named after the angel Gabriel that observes what a person is doing, provides prompts to help in completing tasks in real time, and avoids being a pest when not needed.
Expert "coaching" systems are one of the great promises of the digital age: powerful algorithms backed by massive databases helping people to solve problems or teaching them to carry out tasks. It seems like a straightforward idea. The internet is the world's single largest repository of information, computer vision is becoming much more advanced, and supercomputers running cognitive algorithms like IBM's Watson have tremendous problem-solving ability, so such systems should be easy to create ... but they aren't.
The cloud has huge potential for creating a system that can guide a user through a task, but a number of problems need to be overcome first. For one thing, a wearable system isn't much use if it simply rattles off a list of steps like a manual or a training video. The result would be a string of needless instructions, much like a GPS that keeps giving directions out of the airport carpark even though you know the way home perfectly well.
Another problem is the time lag. It takes time for data to go from a wearable device, into the cloud, to the computing platform, and then back again with the answer. If this takes too long, that answer will be useless, so a cognitive assistance system needs to be fast as well as powerful.
Funded by a four-year US$2.8 million National Science Foundation grant, the aim of the Gabriel project is to produce a system that watches what the user is doing, assesses the situation, and offers advice when needed while keeping quiet the rest of the time.
Gabriel is currently in the proof-of-concept phase and can guide a user through the process of assembling LEGO models, freehand sketching, or playing Ping Pong. The latter, which involves Gabriel prompting the player to move left or right, demonstrates the speed such a system needs if it is to be practical.
Gabriel works in conjunction with a wearable vision system like Google Glass, which allows it to monitor what the user is doing. The key to the system is a "cloudlet," which gives Gabriel what the team describes as robot-like sensing and task planning, with the user doing the actual work. Conceived by Mahadev Satyanarayanan, professor of computer science and the principal investigator for Gabriel, cloudlets are essentially data centers that support multiple mobile users.
What makes the cloudlets different is that they are stationed on cell towers or in buildings in close proximity to the users. In this way, the data only has to make one wireless connection rather than the tens or even hundreds that a typical cloud connection must navigate. The team says that this reduces the roundtrip time for data communications from a typical 70 milliseconds to a few tens of milliseconds or less.
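The difference those round-trip numbers make can be sketched with a back-of-envelope latency budget. The round-trip figures below come from the article; the processing time and response deadline are illustrative assumptions, not measured values from the Gabriel project.

```python
# Rough latency budget for a real-time prompt: camera frame goes to a
# server, gets processed, and a cue comes back to the user.
CLOUD_RTT_MS = 70      # typical wide-area cloud round trip (per the article)
CLOUDLET_RTT_MS = 20   # "a few tens of milliseconds or less" (per the article)
PROCESSING_MS = 50     # assumed server-side vision processing time
DEADLINE_MS = 100      # assumed deadline for a prompt to still be useful

def meets_deadline(rtt_ms, processing_ms=PROCESSING_MS, deadline_ms=DEADLINE_MS):
    """True if capture -> server -> prompt fits within the deadline."""
    return rtt_ms + processing_ms <= deadline_ms

print(meets_deadline(CLOUD_RTT_MS))     # 70 + 50 = 120 ms: misses the deadline
print(meets_deadline(CLOUDLET_RTT_MS))  # 20 + 50 = 70 ms: makes the deadline
```

Under these assumed numbers, the same processing workload fits the real-time budget only when the network hop is short, which is the argument for placing cloudlets near the user.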
Carnegie Mellon says that the team is working on improving Gabriel's computer vision while adding audio and location sensing. Its first applications will be in areas of special skill or knowledge, but it may one day see wider applications.
"Ten years ago, people thought of this as science fiction," says Satyanarayanan. "But now it's on the verge of reality."