C-Face tech "sees" people's expressions, even through masks
With people wearing masks so much of the time now, it can be difficult to tell what expression is on someone's face. A new system can reportedly do so, though, using cameras mounted on the wearer's own headphones.
Known as C-Face, the experimental setup was developed by a Cornell University team led by Asst. Prof. Cheng Zhang. It incorporates two miniature computer-connected RGB cameras, which are positioned below the subject's ears on a third-party set of headphones.
By analyzing images of the changing contours of the person's cheeks, which the cameras capture from behind, the system is able to ascertain the current positions of 42 key facial feature points. This data is in turn used to determine the present shape of the person's mouth, eyes and eyebrows. Those combined shapes make up their overall expression, which the system indicates by displaying one of eight corresponding emojis on a computer screen.
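The final step of that pipeline, going from tracked landmark positions to one of eight emojis, could in principle be a simple nearest-template classifier. The sketch below is purely illustrative: the labels, the landmark grouping, and the template values are assumptions for demonstration, not the Cornell team's actual method or code.

```python
# Illustrative sketch: reduce 42 (x, y) facial landmarks to a crude
# three-feature summary (mouth, eye and brow openness), then pick the
# nearest of eight emoji templates. All values below are made up.
import math

# Hypothetical per-emoji feature templates: (mouth, eyes, brows) openness.
TEMPLATES = {
    "neutral":    (0.2, 0.5, 0.5),
    "happy":      (0.6, 0.5, 0.6),
    "sad":        (0.1, 0.4, 0.3),
    "surprised":  (0.8, 0.9, 0.9),
    "angry":      (0.3, 0.6, 0.1),
    "wink":       (0.4, 0.2, 0.5),
    "sleepy":     (0.1, 0.1, 0.4),
    "open_mouth": (0.9, 0.5, 0.5),
}

def features_from_landmarks(points):
    """Summarize 42 (x, y) points as three vertical-spread features.

    Assumes points[0:14] belong to the mouth, points[14:28] to the eyes
    and points[28:42] to the brows -- an arbitrary split for this sketch.
    """
    def vertical_spread(pts):
        ys = [y for _, y in pts]
        return max(ys) - min(ys)

    return (vertical_spread(points[0:14]),
            vertical_spread(points[14:28]),
            vertical_spread(points[28:42]))

def classify(points):
    """Return the emoji label whose template is closest to the features."""
    feats = features_from_landmarks(points)
    return min(TEMPLATES, key=lambda k: math.dist(feats, TEMPLATES[k]))
```

In the real system the mapping is learned from camera images rather than hand-built templates, but the overall shape (landmarks in, one of eight discrete expressions out) is the same.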
In tests conducted on nine volunteers, the new technology was used alongside an existing state-of-the-art setup that tracks the position of facial landmarks. The latter does so utilizing frontal cameras that capture images of the entire unmasked face.
Compared to that system, C-Face had a margin of error of less than 0.8 mm. Additionally, the emojis it displayed reflected the person's real expression with an accuracy rate of over 88 percent. That figure should increase as the system is developed further.
Down the road, C-Face could also be used for applications such as silent, hands-free control of computers in quiet settings like libraries, with users making specific expressions to trigger specific commands.
Source: Cornell University