Doing a web search for an item you remember seeing can be difficult if you don't know what it's called and you don't have a picture of it. If only you could draw a rough sketch of what you saw on a touchscreen and use that as your search query. Well, you may soon be able to, thanks to the new Sketch-a-Net computer program.
Developed by a team at Queen Mary University of London, Sketch-a-Net is an example of a deep neural network. This means it loosely mimics the workings of the human brain, using machine-learning algorithms to build on what it already knows each time it performs a new task.
So far, the program has been able to correctly identify the subject of people's touchscreen sketches with an accuracy rate of 74.9 percent – by contrast, human test subjects haven't done quite as well, managing 73.1 percent. Interestingly, part of its success comes from keeping track of the order in which the lines of each sketch are drawn.
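The study itself doesn't publish code, but the stroke-order idea can be illustrated with a small, hypothetical sketch: render each stroke into its own channel, so a network sees not just where lines are but when they were drawn. The grid size, stroke format, and function name below are illustrative assumptions, not details from the actual system.

```python
# Hypothetical illustration (not the actual Sketch-a-Net code): encode a
# sketch's strokes into ordered channels so drawing order is preserved.

def rasterize_strokes(strokes, size=8, channels=3):
    """Render each stroke into its own binary grid, one channel per stroke.

    strokes: list of strokes in drawing order, each a list of (x, y)
             integer grid points. Strokes beyond `channels` are merged
             into the last channel.
    Returns a list of `channels` grids, each a size x size list of 0/1.
    """
    grids = [[[0] * size for _ in range(size)] for _ in range(channels)]
    for i, stroke in enumerate(strokes):
        ch = min(i, channels - 1)  # later strokes share the last channel
        for x, y in stroke:
            if 0 <= x < size and 0 <= y < size:
                grids[ch][y][x] = 1
    return grids

# Two strokes drawn in order: a horizontal line, then a vertical line.
sketch = [
    [(0, 3), (1, 3), (2, 3), (3, 3)],  # stroke 1 -> channel 0
    [(3, 0), (3, 1), (3, 2), (3, 3)],  # stroke 2 -> channel 1
]
grids = rasterize_strokes(sketch)
```

A plain classifier that only saw the merged image couldn't tell which line came first; stacking the channels keeps that temporal cue available as an extra input dimension.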
The gap widens when it comes to picking up on defining details in sketches. When asked to differentiate between drawings of the similar subjects "seagull," "flying bird," "standing bird" and "pigeon," for example, Sketch-a-Net was correct 42.5 percent of the time, compared to the 24.8 percent scored by humans.
It is hoped that the technology will ultimately allow people to perform searches simply by drawing pictures of what they're looking for. It could also be used to match police sketches to existing mugshots, and to enhance scientists' understanding of visual perception.
"It’s exciting that our computer program can solve the task even better than humans can," says Dr. Timothy Hospedales, co-author of the study. "Sketches are an interesting area to study because they have been used since prehistoric times for communication and now, with the increase in use of touchscreens, they are becoming a much more common communication tool again."
Source: Queen Mary University of London