
Animal-sorting AI tackles reams of camera trap photos

A camera trap photo, which the AI system correctly reported as a picture of two impala standing (Credit: Snapshot Serengeti)

So-called "camera traps" – motion-activated cameras that automatically take photos of passing wildlife – are a great way of determining the type and number of animals present in a given area. Now, they could prove even more useful, thanks to a new artificial intelligence (AI) system.

The main challenge with existing camera traps is that people have to manually go through all of the photos, identifying and recording the species and number of animals present in each image. The citizen science group Snapshot Serengeti currently relies on crowdsourced volunteers to perform that task, but the process is time-consuming, and recruiting enough volunteers can be difficult.

With that in mind, the group's leaders wanted to find out if AI could be used to speed up and automate the process. To that end, they contacted University of Wyoming associate professor Jeff Clune, who is also a senior research manager at Uber's Artificial Intelligence Labs.

Working with colleagues from other universities, he developed a deep learning algorithm that can automatically identify animals in up to 99.3 percent of photos, with a 96.6 percent accuracy rate that is roughly on par with human volunteers. In order to "train" that algorithm, Snapshot Serengeti supplied Clune with 3.2 million camera trap images in which the type, number and behaviour of the animals had been labelled by more than 50,000 volunteers over the course of several years.
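To give a rough sense of how such a system is typically trained, here is a minimal sketch of fine-tuning a pretrained image classifier on labelled camera trap photos. It is not the authors' published pipeline: the ResNet-50 backbone, the "data/train" folder layout, and the hyperparameters are all illustrative assumptions; only the 48-species figure comes from the article.

```python
# Hypothetical sketch: fine-tune a pretrained CNN to sort camera trap photos
# into species categories. Paths, backbone choice and hyperparameters are
# assumptions for illustration, not the published method.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_SPECIES = 48  # the article reports 48 species categories

# Standard ImageNet-style preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumes a folder-per-species layout, e.g. data/train/impala/0001.jpg
train_set = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

# Start from an ImageNet-pretrained ResNet and replace the final layer.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(5):  # a real run would use far more epochs and images
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```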

"Not only does the artificial intelligence system tell you which of 48 different species of animal is present, but it also tells you how many there are and what they are doing," says Margaret Kosmala, a Snapshot Serengeti leader and Harvard University researcher, who helped develop the technology. "It will tell you if they are eating, sleeping, if babies are present, etc. We estimate that the deep learning technology pipeline we describe would save more than eight years of human labeling effort for each additional three million images. That is a lot of valuable volunteer time that can be redeployed to help other projects."

A paper on the research was published this week in the journal PNAS.

In related news, teams at both the University of Southern California and tech firm Neurala have recently developed AI systems that are capable of differentiating between animals and poachers in video shot by drones.

Source: University of Wyoming
