
GoPro trains neural net to think like a dog

The researchers used an Alaskan Malamute to train their AI

Researchers at the University of Washington are using GoPro cameras to train neural networks to behave and plan like dogs. By strapping a GoPro camera to an Alaskan Malamute's head, the team recorded 380 dog's-eye-view video clips, which were then shown to the neural net to teach it how dogs are likely to behave in a given situation. The team calls this DECADE, or the Dataset of Ego-Centric Actions in a Dog Environment.

The team recorded video in over 50 locations, including living rooms, stairways, streets and dog parks. While recording, the dog carried out various doggy deeds like walking, following, fetching, tracking moving objects, and "interacting" with other dogs.

As well as capturing video, the researchers recorded the dog's movements by tracking the positions of its body, legs and tail over time. This movement data was captured using an Arduino, along with an audio track, and by matching that audio with the sound recorded by the GoPro camera, a Raspberry Pi computer could accurately synchronize the video footage with the data on the dog's body position.
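One common way to perform that kind of audio-based alignment is to cross-correlate the two sound tracks and shift one timeline by the lag at the correlation peak. The sketch below illustrates the idea in Python; the function name and the assumption that both tracks are mono arrays sampled at the same rate are ours, not details from the paper.

```python
import numpy as np
from scipy.signal import correlate

def audio_offset_seconds(gopro_audio, sensor_audio, sample_rate):
    """Estimate the lag (in seconds) between two mono audio tracks
    by locating the peak of their cross-correlation."""
    # Normalize both tracks so differences in loudness don't skew the result
    a = (gopro_audio - gopro_audio.mean()) / (gopro_audio.std() + 1e-8)
    b = (sensor_audio - sensor_audio.mean()) / (sensor_audio.std() + 1e-8)
    corr = correlate(a, b, mode="full")
    lag = corr.argmax() - (len(b) - 1)  # positive lag: sensor track starts later
    return lag / sample_rate

# The estimated offset can then be applied to the sensor timestamps before
# pairing each video frame with the nearest body-position sample.
```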

Essentially, the researchers then asked the AI model "what happens next?" when it was shown a series of frames, having it predict how the dog would behave. For example, if the dog is shown a treat it's likely to sit, and if it sees a ball thrown it's likely to chase it.
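A minimal sketch of that kind of next-action prediction might look like the PyTorch snippet below, assuming each frame has already been encoded into a feature vector and the dog's movements have been discretized into action classes; the layer sizes, class count and module names are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class NextActionPredictor(nn.Module):
    """Predict the dog's next action from a short sequence of frame features."""
    def __init__(self, feature_dim=512, hidden_dim=256, num_actions=64):
        super().__init__()
        # An LSTM summarizes the recent frames; a linear head scores each action class
        self.rnn = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_actions)

    def forward(self, frame_features):           # (batch, time, feature_dim)
        _, (hidden, _) = self.rnn(frame_features)
        return self.head(hidden[-1])             # (batch, num_actions) logits

model = NextActionPredictor()
frames = torch.randn(8, 5, 512)                  # 8 clips, 5 encoded frames each
logits = model(frames)                           # scores for "what happens next?"
```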

As well as video, the team recorded the dog's motions to help teach its AI model

They also asked the model to plan what the dog would do between a starting video frame and an end frame, with no information given on what happens in between. The team designed a neural network to tackle this problem – an inherently complex one since, with each action the dog takes, the state of the world changes.
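One way to frame that planning problem is as conditional sequence generation: given features for the start and end frames, a model proposes one action at a time, feeding each choice back in precisely because every action changes the state of the world. The sketch below only illustrates that idea, with assumed module names and dimensions rather than the team's published network.

```python
import torch
import torch.nn as nn

class ActionPlanner(nn.Module):
    """Propose a fixed-length sequence of actions that could take the dog
    from a starting frame to a goal frame."""
    def __init__(self, feature_dim=512, hidden_dim=256, num_actions=64, steps=5):
        super().__init__()
        self.steps = steps
        self.init_state = nn.Linear(2 * feature_dim, hidden_dim)
        self.cell = nn.GRUCell(num_actions, hidden_dim)
        self.head = nn.Linear(hidden_dim, num_actions)

    def forward(self, start_feat, goal_feat):
        # Condition the planner on both endpoints of the clip
        h = torch.tanh(self.init_state(torch.cat([start_feat, goal_feat], dim=-1)))
        prev = torch.zeros(start_feat.size(0), self.head.out_features)
        plan = []
        for _ in range(self.steps):
            h = self.cell(prev, h)
            logits = self.head(h)
            plan.append(logits)
            # Feed the chosen action back in: each step alters the world state
            prev = torch.softmax(logits, dim=-1)
        return torch.stack(plan, dim=1)           # (batch, steps, num_actions)

planner = ActionPlanner()
start, goal = torch.randn(4, 512), torch.randn(4, 512)
action_logits = planner(start, goal)
```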

The researchers say their work shows promise for the field of visual intelligence, with the model able to predict the dog's movements in certain situations, both in terms of what she would do and how she would do it. This is despite the model being given no tasks or prior knowledge of the desired end result. "Our experiments show that our models can make predictions about future movements of a dog and can plan movements similar to the dog," the team writes.

The work could one day inform the development of robot canines, but the team also points out that these learned skills are applicable to other tasks, such as the identification of safe routes through an environment. The team thinks its model could be improved by incorporating touch, sounds and smell, as well as by gathering data from many dogs rather than the one.

A draft of the team's research, Who Let The Dogs Out? Modeling Dog Behavior From Visual Data, has been published on arxiv.org. It's due to be presented at the computer vision event CVPR 2018 in June.

Source: University of Washington
