AI "painting" approach packs more emotion than pixelation
If you really want to sense the emotion in what someone is saying, it helps to see their facial expressions along with hearing their words. That becomes impossible, however, when news programs pixelate or black out the faces of anonymous interviewees. Scientists have now developed a workaround that uses artificial intelligence (AI) to "paint" those people's faces instead.
The system was developed by professors Steve DiPaola and Kate Hennessy from Canada's Simon Fraser University, working with assistant professor Taylor Owen from the University of British Columbia.
It starts by distorting the video image of a person's face, altering their facial proportions to make them less recognizable. This first pass is performed by a human computer operator; the AI then adds a second layer of random distortions. That randomization makes it impossible to recover the person's original appearance by reverse-engineering the process.
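The two-stage distortion described above can be sketched in Python. This is purely illustrative — the actual system has not been published, so the landmark representation, the distortion ranges, and all function names here are assumptions. The key idea is that the second, random pass discards its offsets, so no fixed inverse transform exists.

```python
import random

def operator_distort(landmarks, stretch=1.15):
    """First pass (hypothetical): a human-chosen proportional change,
    e.g. widening the face by a fixed factor."""
    return [(x * stretch, y) for x, y in landmarks]

def ai_random_distort(landmarks, jitter=0.08, rng=None):
    """Second pass (hypothetical): random per-point jitter. Because the
    offsets are drawn at random and never stored, the original geometry
    cannot be recovered by inverting a known transform."""
    rng = rng or random.Random()
    return [
        (x * (1 + rng.uniform(-jitter, jitter)),
         y * (1 + rng.uniform(-jitter, jitter)))
        for x, y in landmarks
    ]

# Hypothetical normalized facial landmarks (x, y in [0, 1]):
# left eye, right eye, nose tip, mouth center.
face = [(0.30, 0.40), (0.70, 0.40), (0.50, 0.55), (0.50, 0.75)]
anonymized = ai_random_distort(operator_distort(face), rng=random.Random(42))
```

In a real system the distortions would warp pixels rather than a toy landmark list, but the structure — a deterministic human-guided pass followed by a non-invertible random pass — matches the description above.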
Next, the AI applies its painting process to the image. Based on techniques used by artists such as Picasso and van Gogh, this boosts the anonymization while simultaneously enhancing the subject's underlying facial expressions. The AI also takes the tone of subjects' voices into account when determining not only how to depict their faces, but also when choosing elements such as colors, which help convey emotion.
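The voice-to-color idea can be illustrated with a minimal sketch. The researchers' actual model is not public; the two tone scores (arousal and valence) and the palette labels below are assumptions chosen only to show how a tone-driven color choice might work.

```python
def palette_for_tone(arousal, valence):
    """Map hypothetical voice-tone scores in [0, 1] to a base palette.
    High-arousal negative tones lean toward intense reds; calm positive
    tones toward cooler hues. The thresholds are illustrative, not the
    researchers' model."""
    if valence >= 0.5:
        return "warm yellow" if arousal >= 0.5 else "soft green"
    return "intense red" if arousal >= 0.5 else "muted blue"
```

For example, an agitated, distressed voice (high arousal, low valence) would steer the painting toward intense reds, while a calm, content one would pull it toward softer greens.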
"When artists paint a portrait, they try to convey the subject's outer and inner resemblance," says DiPaola. "With our AI, which learns from more than 1,000 years of artistic technique, we have taught the system to lower the outer resemblance and keep as high as possible the subject's inner resemblance – in other words, what they are conveying and how they are feeling."
Plans now call for the technology to be tested with a partnering journalistic institution. The system could also have applications in 360-degree virtual reality.
You can see it in use in the following video.