
Backpack AI system helps the visually impaired navigate safely

A backpack AI system and cameras (not pictured) could help visually impaired people safely navigate streets

Autonomous vehicles and robots use elaborate suites of sensors and cameras to make sense of their surroundings – but visually impaired people still get by with canes and guide dogs. Now, engineers have developed a voice-activated wearable system that can track obstacles in real time and describe a person’s surroundings.

This new visual assistance system is made up of several components that add little bulk beyond what you might already be wearing when you leave the house: a vest or fanny pack, a backpack and a pair of earphones. The team says that hiding the electronics was a key goal, so users don’t look like cyborgs walking down the street.

The vest or fanny pack houses a series of cameras – a 4K camera that captures color information, and a pair of stereoscopic cameras that map depth. This visual information is then fed to the brains of the operation, stashed in the backpack.
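To give a feel for the depth-mapping step, here is a minimal sketch of how OpenCV can turn two offset views from a stereo pair into a disparity map. This is illustrative only – not the team’s actual pipeline – and the file names are placeholders.

```python
# Minimal sketch of stereo depth estimation with OpenCV (not the team's code).
import cv2

# Assumed inputs: rectified grayscale frames from the chest-mounted stereo pair.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching finds, for each pixel, how far a patch shifts between views;
# nearer objects shift more (larger disparity).
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # int16, scaled by 16

# Given focal length f (pixels) and camera baseline b (meters),
# depth is roughly f * b / (disparity / 16) wherever a valid match exists.
```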

There, a computing unit such as a laptop or Raspberry Pi runs an AI interface called the OpenCV AI Kit with Depth (OAK-D), which uses neural networks to analyze the visual data. The backpack also holds a portable battery that provides up to eight hours of use, and a USB-connected GPS unit.
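The article doesn’t detail the networks involved, but the detection step could look something like this sketch using OpenCV’s DNN module with an off-the-shelf MobileNet-SSD model. The model files and confidence threshold here are assumptions; the real system runs its inference through the OAK-D kit.

```python
# Hedged sketch of neural-network object detection with OpenCV's DNN module.
import cv2

# Assumed model files: a standard, publicly available MobileNet-SSD.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

frame = cv2.imread("street.jpg")
h, w = frame.shape[:2]

# SSD expects a fixed 300x300 input, scaled and mean-subtracted.
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                             0.007843, (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()  # shape (1, 1, N, 7): [_, class_id, conf, x1, y1, x2, y2]

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:  # assumed confidence cutoff
        class_id = int(detections[0, 0, i, 1])
        box = detections[0, 0, i, 3:7] * [w, h, w, h]  # scale to frame size
        print(class_id, round(float(confidence), 2), box.astype(int))
```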

Views of the backpack AI system

The crunched visual data is then relayed via Bluetooth to a pair of earphones, letting the user know what’s around them. It can warn of obstacles of different shapes, sizes and types, and declare where they are in relation to the user, using descriptors like front, top, bottom, left, right and center.

So, for instance, the system can inform someone walking down the street that they’re approaching a trash can on their “bottom, left,” or a low-hanging branch at “top, center.” Tripping hazards like curbs or stairs can be spotted as changes in elevation, and the system can even recognize key features like stop signs and crosswalks as the user approaches a corner.
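As a rough illustration, a detection’s bounding box could be mapped to those spoken descriptors with a simple grid test. This is a minimal sketch; the one-third thresholds and the “front, center” fallback are our assumptions, not the project’s documented logic.

```python
def describe_position(box, frame_w, frame_h):
    """Map a bounding box (x1, y1, x2, y2) to a descriptor like 'bottom, left'."""
    cx = (box[0] + box[2]) / 2 / frame_w  # horizontal center, 0..1
    cy = (box[1] + box[3]) / 2 / frame_h  # vertical center, 0..1

    vertical = "top" if cy < 1 / 3 else "bottom" if cy > 2 / 3 else "center"
    horizontal = "left" if cx < 1 / 3 else "right" if cx > 2 / 3 else "center"

    if vertical == horizontal == "center":
        return "front, center"  # assumed phrasing for dead-ahead objects
    return f"{vertical}, {horizontal}"

# A trash can low in the left of a 1280x720 frame:
print(describe_position((40, 620, 180, 720), 1280, 720))  # -> "bottom, left"
```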

Users can also issue voice commands to ask for more information. Saying “describe” will make the system respond with a list of what’s around them and where, such as “car, 10 o’clock,” “person, 12 o’clock,” and “traffic light, 1 o’clock.”
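A plausible way to produce those clock directions is to convert each detection’s horizontal offset into an angle from straight ahead. The sketch below assumes a roughly 120-degree combined field of view; the actual mapping the team uses isn’t described.

```python
def clock_position(cx, frame_w, fov_deg=120.0):
    """Map a horizontal pixel coordinate to a clock direction (12 = dead ahead)."""
    # Angle from the center of the frame: negative = left, positive = right.
    angle = (cx / frame_w - 0.5) * fov_deg
    # Each clock hour spans 30 degrees of bearing.
    hour = 12 + round(angle / 30.0)
    hour = ((hour - 1) % 12) + 1  # wrap so 13 -> 1 and 12 stays 12
    return f"{hour} o'clock"

print(clock_position(100, 1280))   # far left  -> "10 o'clock"
print(clock_position(640, 1280))   # center    -> "12 o'clock"
print(clock_position(1200, 1280))  # far right -> "2 o'clock"
```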

Specific places can also be saved for future reference with commands like “save location, coffee shop.” Later, when the user wants to get back there, they can say “locate coffee shop” and the system will give directions and say how far away it is.
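Under the hood, the save/locate commands presumably pair the voice label with a fix from the USB GPS unit. Here is a minimal sketch of that idea; the function names, data structure, and use of the haversine formula are all assumptions rather than details from the project.

```python
import math

saved_locations = {}  # voice label -> (latitude, longitude)

def save_location(name, lat, lon):
    saved_locations[name] = (lat, lon)

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance in meters between two GPS fixes."""
    r = 6_371_000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def locate(name, here_lat, here_lon):
    lat, lon = saved_locations[name]
    return f"{name} is {distance_m(here_lat, here_lon, lat, lon):.0f} meters away"

save_location("coffee shop", 45.5230, -122.6760)
print(locate("coffee shop", 45.5210, -122.6790))  # -> "coffee shop is 323 meters away"
```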

Other high-tech aids for visually impaired people are emerging, like laser-ranging canes that sense objects and terrain changes, and Toyota’s guide collar that buzzes on the left or right side to let people know which way to turn. But this new system provides far more detail, potentially allowing people with limited vision to navigate the world more independently.

It’s still early days, but the team hopes to fast-track the system by making the project non-commercial and open source.

The system can be seen in action in the video below.

Visual Assistance System for the Visually Impaired

Source: Intel
