Most of us probably don't like the idea of some stranger finding out who we are, based only on an online photo of us. Thanks to the facial recognition systems used by social media sites, however, such a thing is becoming increasingly possible. Scientists recently decided to do something about it, by turning a couple of artificial intelligence (AI) systems against one another.

At the University of Toronto, Prof. Parham Aarabi and grad student Avishek Bose started by designing two AI-based neural networks. One of these used the same techniques as existing facial recognition systems to identify people in photos. The other network sought to thwart the first one by slightly altering the aspects of those photos that were being used to identify the people.

"The disruptive AI can 'attack' what the neural net for the face detection is looking for," says Bose. "If the detection AI is looking for the corner of the eyes, for example, it adjusts the corner of the eyes so they're less noticeable. It creates very subtle disturbances in the photo, but to the detector they're significant enough to fool the system."

The two networks went back and forth for a while, each one learning what the other was doing and trying to compensate for it. What ultimately resulted was an algorithm that could be applied to photos of faces, making them nearly facial-recognition-proof yet still recognizable to people who knew them.
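That back-and-forth can be sketched as a miniature game loop — again a hedged toy, with a mean-difference "detector" and a sign-step "disruptor" standing in for the real neural networks: each round, the detector refits itself to the latest perturbed images, and the disruptor then re-perturbs them against the detector's newest weights.

```python
import numpy as np

# Toy alternating game (an illustrative sketch, not the authors' code):
# the detector adapts to the perturbed faces, then the disruptor adapts
# to the detector, round after round.

rng = np.random.default_rng(1)
dim = 32
faces = rng.random((20, dim)) + 0.5   # toy "face" images
others = rng.random((20, dim)) - 0.5  # toy "non-face" images
perturbed = faces.copy()
epsilon = 0.05                        # per-round perturbation budget

history = []
for _ in range(5):
    # Detector's turn: refit a direction separating current faces from non-faces.
    w = perturbed.mean(axis=0) - others.mean(axis=0)
    before = float((perturbed @ w).mean())
    # Disruptor's turn: step every face image against that direction.
    perturbed = perturbed - epsilon * np.sign(w)
    after = float((perturbed @ w).mean())
    history.append((before, after))

for before, after in history:
    print(f"mean detection score: {before:.2f} -> {after:.2f}")
```

Within every round the disruptor lowers the detector's mean score, and the detector then recalibrates — the compensation loop the article describes, which in the real system eventually converges to a reusable photo-altering algorithm.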

Aarabi and Bose tested the system on the existing 300-W face dataset, which consists of photos of over 600 faces covering different ethnicities, lighting conditions and environments. Without the algorithm being applied to those images, a facial recognition system was able to accurately identify almost 100 percent of the people. Once it was applied, however, that rate dropped to 0.5 percent.

It is now hoped that the algorithm could be integrated into a publicly available app or website, which people who are concerned about their privacy could use to treat their photos before posting them.