
Dancing bees inspire alternative communication system for robots

In order to tell other bees where a food source is relative to the hive, forager honeybees will perform a "waggle dance"

We've heard about robots that communicate with one another via wireless networks in order to collaborate on tasks. Sometimes, however, such networks aren't an option. A new bee-inspired technique gets the bots to "dance" instead.

Since honeybees have no spoken language, they often convey information to one another by wiggling their bodies.

Known as a "waggle dance," this pattern of movements can be used by one forager bee to tell other bees where a food source is located. The direction of the movements corresponds to the food's direction relative to the hive and the sun, whereas the duration of the dance indicates the food's distance from the hive.
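
The dance is effectively a polar coordinate: an angle plus a duration. As a rough illustration only (real bees' distance calibration varies by species and conditions), here is a minimal Python sketch of that scheme, with an invented SECONDS_PER_METER constant standing in for the bees' actual distance-to-duration mapping:

```python
# Hypothetical, simplified model of the waggle-dance code: the dance angle
# (measured relative to vertical, which stands in for the sun's direction)
# encodes the bearing to the food, and the dance duration encodes distance.
SECONDS_PER_METER = 0.001  # illustrative calibration constant, not a measured value

def encode_waggle(bearing_from_sun_deg: float, distance_m: float) -> tuple[float, float]:
    """Return (dance_angle_deg, dance_duration_s) advertising a food source."""
    return bearing_from_sun_deg, distance_m * SECONDS_PER_METER

def decode_waggle(dance_angle_deg: float, dance_duration_s: float) -> tuple[float, float]:
    """Invert the encoding: recover (bearing_from_sun_deg, distance_m)."""
    return dance_angle_deg, dance_duration_s / SECONDS_PER_METER

angle, duration = encode_waggle(40.0, 1500.0)  # food 1.5 km away, 40 degrees off the sun line
print(decode_waggle(angle, duration))          # -> (40.0, 1500.0)
```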

Inspired by this behaviour, an international team of researchers set out to see if a similar system could be used by robots and humans in locations such as disaster sites, where wireless networks aren't available.

In the proof-of-concept system the scientists created, a person starts by making arm gestures to a camera-equipped TurtleBot "messenger robot." Using skeletal-tracking algorithms, the bot interprets the coded gestures, which relay the location of a package within the room. The wheeled messenger bot then drives over to a "package handling robot" and traces a pattern on the floor in front of it.
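
The paper's actual control code isn't reproduced here, but the trace itself is simple enough to sketch. The Python below is a hypothetical illustration, with an invented DriveBase class standing in for a real robot base (on a TurtleBot this would be a ROS velocity interface): rotate to the package's bearing, then drive a straight run whose duration encodes the travel time.

```python
import time

# Hypothetical sketch of the messenger robot's "waggle" trace: rotate to the
# package's bearing, then drive a straight run whose duration encodes the
# travel time the receiving robot should reproduce. DriveBase is a stand-in,
# not the paper's actual TurtleBot code.

class DriveBase:
    """Placeholder for a real robot base (e.g. a ROS velocity publisher)."""
    def rotate_to(self, heading_deg: float) -> None:
        print(f"rotating to {heading_deg:.0f} deg")
    def forward(self, speed_mps: float) -> None:
        print(f"driving forward at {speed_mps} m/s")
    def stop(self) -> None:
        print("stopped")

def trace_waggle(base: DriveBase, heading_deg: float, travel_time_s: float) -> None:
    # The run's orientation encodes direction; its duration encodes distance.
    base.rotate_to(heading_deg)
    base.forward(0.1)  # slow and steady, easy for the watching camera to track
    time.sleep(travel_time_s)
    base.stop()

trace_waggle(DriveBase(), heading_deg=40.0, travel_time_s=6.0)
```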

As the package handling robot watches with its own depth-sensing camera, it ascertains the direction in which the package is located from the orientation of the pattern, and determines the distance it will have to travel from how long the messenger takes to trace the pattern. It then travels in the indicated direction for the indicated amount of time, and once it reaches the destination, it uses its object recognition system to spot the package.
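
On the receiving side, decoding only needs the start and end of the tracked run: its orientation gives the heading, and its elapsed time gives the travel duration. Here is a minimal sketch, assuming the depth camera yields timestamped (x, y) positions of the messenger bot; the decode_trace function and the sample data are hypothetical:

```python
import math

# Hypothetical decoding on the package handling robot's side: given timestamped
# (x, y) positions of the messenger bot, recover the run's orientation (the
# direction to travel) and its duration (how long to travel).

def decode_trace(track: list[tuple[float, float, float]]) -> tuple[float, float]:
    """track: list of (t_seconds, x, y) samples. Returns (heading_deg, duration_s)."""
    t0, x0, y0 = track[0]
    t1, x1, y1 = track[-1]
    heading_deg = math.degrees(math.atan2(y1 - y0, x1 - x0))
    return heading_deg, t1 - t0

# Example: a 6-second run pointing toward roughly 40 degrees.
track = [(0.0, 0.0, 0.0), (3.0, 0.23, 0.19), (6.0, 0.46, 0.39)]
heading, duration = decode_trace(track)
print(f"drive at {heading:.0f} deg for {duration:.0f} s, then look for the package")
```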

In tests performed so far, both robots have accurately interpreted (and acted upon) the gestures and waggle dances approximately 93 percent of the time.

The research was led by Prof. Abhra Roy Chowdhury of the Indian Institute of Science, and PhD student Kaustubh Joshi of the University of Maryland. It is described in a paper that was recently published in the journal Frontiers in Robotics and AI.

Source: Frontiers

3 comments
paul314
Perhaps for human-to-robot communication this makes sense, but for robot-to-robot, if you have good enough video to decode gestures, you almost certainly also have good enough video to decode an LED blinking in Morse code (or equivalent) at much higher bandwidth. Perhaps in cases where you only have sonar or lidar?
Daishi
On a side note, this is part of why humans would never be able to contain an AI if we ever achieve the singularity. As soon as separately controlled AI systems were allowed to do or influence anything, they would establish a method to communicate with each other that humans would likely not be aware of. Just as humans have old/simple methods like blinking Morse code or more modern steganography. Just the ability to vibrate, or something like LiFi where high-bandwidth data can be communicated over LED in a way indistinguishable to the human eye, would ensure AI systems establish a communications network between themselves and collaborate. Trying to "air gap" anything that sophisticated would be impossible.
fluke meter
Agreed, paul314.