Learning from photos, a deep neural network identifies deepfakes

The distinctive boundaries of the image of this cat give away the fact that it was digitally added to the photo
University of California, Riverside

They're known as deepfakes – photos or videos that have been very convincingly manipulated to depict people saying or doing things that they never actually said or did. They're potentially quite the problem, so an experimental new deep neural network has been designed to spot them.

Led by Prof. Amit K. Roy-Chowdhury, a team at the University of California, Riverside started with a large dataset of both manipulated and non-manipulated photos. The researchers already knew which ones were which, and labelled them accordingly.
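For readers curious what such a labelled dataset might look like in practice, here's a rough Python sketch of indexing authentic and manipulated photos with binary labels. The folder names and CSV format are illustrative assumptions, not details taken from the UC Riverside study.

```python
import os
import csv

# Hypothetical directory layout: one folder of untouched photos and one of
# manipulated photos (the team's actual dataset and format are not described here).
AUTHENTIC_DIR = "photos/authentic"
MANIPULATED_DIR = "photos/manipulated"

def build_label_index(out_csv="labels.csv"):
    """Write a CSV mapping each image file to a binary label:
    0 = non-manipulated photo, 1 = digitally manipulated photo."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["path", "label"])
        for name in sorted(os.listdir(AUTHENTIC_DIR)):
            writer.writerow([os.path.join(AUTHENTIC_DIR, name), 0])
        for name in sorted(os.listdir(MANIPULATED_DIR)):
            writer.writerow([os.path.join(MANIPULATED_DIR, name), 1])

if __name__ == "__main__":
    build_label_index()
```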

In the manipulated pictures, they highlighted the pixels along the boundaries of objects that had been digitally added to the shot – it had previously been established that in faked photos, those boundaries tend to be smoother than, or otherwise different from, those of objects that were actually in the shot when it was taken. Although those differences can't necessarily be detected by the human eye, a pixel-by-pixel examination done by a computer will pick them up.
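To make that idea concrete, the sketch below measures how sharp an image looks along the outline of a known spliced region – smoother, lower-contrast boundaries being one possible giveaway of a pasted-in object. The mask format and the gradient-based metric are assumptions for illustration; they are not the paper's actual boundary analysis.

```python
import cv2
import numpy as np

def boundary_sharpness(image_path, mask_path):
    """Average gradient magnitude along the outline of a spliced region.
    A comparatively low value suggests an unusually smooth boundary."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)  # assumed: 255 inside the added object

    # Pixels sitting on the object's outline: dilated mask minus eroded mask.
    kernel = np.ones((3, 3), np.uint8)
    outline = cv2.dilate(mask, kernel) - cv2.erode(mask, kernel)

    # Gradient magnitude of the photo itself.
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
    grad = np.sqrt(gx ** 2 + gy ** 2)

    return float(grad[outline > 0].mean())
```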

The labelled dataset was then fed into a deep neural network, which is a set of algorithms modelled loosely after the human brain, designed to recognize patterns in raw data. Using the images, that network learned to identify the telltale boundaries of digitally added objects. When it was subsequently shown photos from outside the dataset – ones it hadn't seen before – it was able to spot the fakes "most of the time."
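As a minimal sketch of what training such a network involves, here is a small PyTorch-style convolutional classifier fed with the labelled photos. The real system described in the paper labels suspicious boundary pixels and is considerably more sophisticated; everything below, from the layer sizes to the training step, is a stand-in assumption.

```python
import torch
import torch.nn as nn

class FakeSpotter(nn.Module):
    """Toy convolutional network that outputs a single logit:
    manipulated photo vs. non-manipulated photo."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train_step(model, images, labels, optimizer):
    """One gradient step on a batch of labelled photos (labels: 0 or 1)."""
    loss = nn.functional.binary_cross_entropy_with_logits(
        model(images).squeeze(1), labels.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```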

Although the system currently only works on photos, the team is now working on applying it to videos, where it would most likely just analyze individual still frames. That said, the technology may never be 100-percent accurate, and would likely end up being used to flag suspicious images that are subsequently analyzed by people.
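Applying a photo-level detector to video frame by frame could look something like the sketch below, which samples frames from a clip and flags the ones a trained model scores as suspicious. The sampling stride and probability threshold are arbitrary illustrative choices, not values from the research.

```python
import cv2
import torch

def flag_suspicious_frames(video_path, model, threshold=0.5, stride=30):
    """Run a trained photo-level detector on every `stride`-th frame of a video
    and return the indices of frames scored above `threshold`."""
    capture = cv2.VideoCapture(video_path)
    suspicious = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % stride == 0:
            # BGR uint8 frame -> normalized RGB tensor of shape (1, 3, H, W).
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            x = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0
            with torch.no_grad():
                prob = torch.sigmoid(model(x)).item()
            if prob > threshold:
                suspicious.append(index)
        index += 1
    capture.release()
    return suspicious
```

Flagged frames would then go to a human reviewer, in line with Roy-Chowdhury's point that neither people nor automated systems can do the job alone.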

"If you want to look at everything that's on the internet, a human can't do it on the one hand, and an automated system probably can't do it reliably," says Roy-Chowdhury. "So it has to be a mix of the two."

A paper on the research was recently published in the journal IEEE Transactions on Image Processing.

Source: University of California, Riverside

3 comments
Grunchy
I was thinking of why someone would try to deceive using photos (for example, fake alibi photos) and it reminded me of accusations against the Kardashians / Jenners (who apparently doctor their own photos all the time). Which in turn made me think about women and their make-up industry. It seems no matter which way you turn, somebody’s showing you lies.
joeblake
Given that digital manipulation is on the increase, especially in politics, this should be a very useful tool in highlighting "fake news", both in the areas of "this is what you said" and "I never said that".
christopher
Total waste of time. As soon as an AI can spot the fakes, an adversarial learning system can use that to produce better fakes. It's an arms race which only the "bad guys" can win, because the effort to be "more bad" is orders of magnitude easier than the effort to "spot the bad".