Researchers have developed a computer-vision system that detects faked expressions of pain 30 percentage points more accurately than human observers. Authors of the study, titled Automatic Decoding of Deceptive Pain Expressions, believe the technology has the potential to detect other misleading behaviors and could be applied in areas including homeland security, recruitment, medicine and law.
“As with causes of pain, these scenarios also generate strong emotions, along with attempts to minimize, mask, and fake such emotions, which may involve dual control of the face,” said Marian Bartlett, research professor at University of California San Diego’s Institute for Neural Computation, and lead author of the study. “In addition, our computer-vision system can be applied to detect states in which the human face may provide important clues as to health, physiology, emotion, or thought, such as drivers’ expressions of sleepiness, students’ expressions of attention and comprehension of lectures, or responses to treatment of affective disorders.”
The joint study, by researchers at the University of California and the University of Toronto, found that humans could not discern real from faked expressions of pain better than chance. Even after being told what signs to watch for, people spotted the fakes only 55 percent of the time. The computer, by contrast, was correct 85 percent of the time.
“The computer system managed to detect distinctive dynamic features of facial expressions that people missed,” Prof Bartlett said. “Human observers just aren’t very good at telling real from faked expressions of pain.”
The study found that the single most predictive feature of falsified expressions of pain is the mouth, specifically how wide and how often it opens. When people fake pain, their mouths open with less variation than when they are in genuine pain.
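As a rough illustration only (the study's actual system used machine-learned facial-action features, not this calculation), the variation cue can be sketched as a comparison of spread in a per-frame mouth-opening signal. The measurements below are hypothetical:

```python
from statistics import pstdev

def mouth_opening_variation(openings):
    """Population standard deviation of per-frame mouth-opening
    measurements (e.g., lip-to-lip distance in pixels)."""
    return pstdev(openings)

# Hypothetical frame-by-frame mouth-opening traces (arbitrary units).
genuine = [2, 9, 4, 12, 3, 10, 5, 11]   # irregular, widely varying openings
faked   = [6, 7, 6, 7, 6, 7, 6, 7]      # regular, low-variation openings

# Per the study's finding, genuine pain should show MORE variation.
print(mouth_opening_variation(genuine) > mouth_opening_variation(faked))
```

The point is only that a temporal statistic like this, invisible to casual human observation, is the kind of dynamic feature a vision system can quantify frame by frame.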
“In highly social species such as humans, faces have evolved to convey rich information including expressions of emotion and pain,” said senior author Kang Lee, professor at the Dr. Eric Jackman Institute of Child Study at the University of Toronto. “Because of the way our brains are built, people can simulate emotions they’re not actually experiencing, so successfully that they fool other people. The computer is much better at spotting the subtle differences between involuntary and voluntary facial movements. By revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate ‘behavioral fingerprints’ of the neural-control systems involved in emotional signaling.”
Next, the researchers plan to explore whether a lack of variation in facial expressions is a feature of misleading behavior in general. Their work is published in the journal Current Biology.
Source: UC San Diego