Alexei Efros and his team of cunning robotics researchers at Carnegie Mellon University have developed an image-matching algorithm that lets computers identify similar images regardless of medium. Like humans, the system can match sketches and paintings with photographs of similar subjects, and so perform tasks that have traditionally posed problems for machines, such as pairing a simple sketch of a car with a photograph of one.
Previous methods have relied on images having similar colors, shapes and composition but, as assistant research professor of robotics Abhinav Gupta explains, this approach can produce false positives: the computer might match, say, two outdoor photos with similar proportions of overcast sky, regardless of the specific subject.
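To see why such false positives arise, consider the simplest form of this older approach: comparing images by their global color statistics. The sketch below (function names are my own, not from the paper) matches images by the Euclidean distance between their color histograms, a representation that discards spatial layout entirely, so two mostly-gray overcast scenes look nearly identical no matter what they depict.

```python
import numpy as np

def color_histogram(image, bins=8):
    # Flatten an RGB image of shape (H, W, 3) into a normalized joint
    # color histogram. Global statistics like this record *which* colors
    # appear, but not *where* -- the root of the false-positive problem.
    pixels = image.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3,
                             range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

def histogram_distance(a, b):
    # Smaller distance = "more similar" under this naive scheme.
    return np.linalg.norm(color_histogram(a) - color_histogram(b))
```

Under this metric, any two images dominated by the same shade of sky score as near-perfect matches, which is exactly the failure mode Gupta describes.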
The team's algorithm, however, identifies the visual elements and properties that make an image unique relative to the others in the pool, then looks for matches on that basis. As a result, their software can match photographs of exterior scenes taken at different times of year, or match a painting to a photograph of the same subject.
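The paper itself learns a per-query linear classifier to decide which features of the query are discriminative against a large pool of other images; the sketch below is only a simplified stand-in for that idea (all names are hypothetical). It weights each feature dimension by how far the query deviates from the pool's statistics, so matching is driven by the query's unusual elements rather than by generic ones like sky color.

```python
import numpy as np

def uniqueness_weights(query, pool):
    # Estimate how "unusual" each feature dimension of the query is
    # relative to the pool: dimensions where the query sits far from
    # the pool mean (in units of pool standard deviation) get high
    # weight. This is an illustrative approximation, not the paper's
    # learned classifier.
    mean = pool.mean(axis=0)
    std = pool.std(axis=0) + 1e-8  # avoid division by zero
    return np.abs(query - mean) / std

def match_scores(query, candidates, pool):
    # Score each candidate by (negated) weighted squared distance to
    # the query; higher score = better match. Agreement on the query's
    # distinctive dimensions dominates the score.
    w = uniqueness_weights(query, pool)
    diffs = candidates - query
    return -np.sum(w * diffs ** 2, axis=1)
```

With weights like these, a candidate that shares the query's rare features outranks one that merely shares its common ones, which is the intuition behind matching a sketch's distinctive car outline to photographs.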
As Efros puts it: "The language of a painting is different than the language of a photograph. Most computer methods latch onto the language, not on what's being said."
The results certainly appear to be impressive. Analyzing a very basic pen sketch of the profile outline of a car, the top matches returned by the algorithm are all profile photographs of cars (all facing the same way as in the sketch). A more detailed sketch of a generic sports car at an oblique angle returns photos of a variety of sports cars taken from about the same angle.
Beyond the obvious applications in image collation and searching, the algorithm may in some instances remove the need for rephotography: the practice of re-photographing a scene after a period of time so it can be compared with a historical photograph. It could also be used to work out the locations from which paintings of landmarks were made.
The Carnegie Mellon team will continue the research, applying the algorithm to object detection in computer vision and working to speed up the matching process.
The team's paper, "Data-driven Visual Similarity for Cross-domain Image Matching," is available online (along with the source code) and will be presented at SIGGRAPH Asia in Hong Kong on December 14. The research page includes an embedded video showing the algorithm's results.