Princeton is building a better selfie
Millions of selfies are snapped every day, all over the world. While there are plenty of tools out there to add filters to images, the end results rarely match reality, with the proximity of the camera causing unflattering distortions. A Princeton-designed tool could change all that, allowing selfies to be adjusted so they appear to have been taken from a little farther away, or even from a slightly different angle.
It's no secret that the selfie is a huge phenomenon; it's been a massive trend for years now, spawning everything from the dreaded selfie stick to quadcopters built to take perfect self-portraits. But the photos still routinely misrepresent the subject, making noses look too big, or chins too weak.
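The distortion comes down to simple pinhole-camera geometry: a feature's projected size scales with the inverse of its distance from the camera, so at arm's length the nose (which sits a few centimeters closer to the lens than the rest of the face) gets magnified noticeably more. A quick back-of-the-envelope sketch, with assumed distances that are not from the study:

```python
# Illustrative pinhole arithmetic (not the Princeton tool itself).
# A feature's projected size scales as 1/Z, so features closer to the
# camera are magnified more. The distances below are rough assumptions.

def relative_magnification(cam_to_nose, nose_to_ears=0.10):
    """How much larger the nose appears relative to the ear plane."""
    z_nose = cam_to_nose
    z_ears = cam_to_nose + nose_to_ears
    return z_ears / z_nose

selfie = relative_magnification(0.30)    # arm's-length selfie, ~30 cm
portrait = relative_magnification(1.50)  # photographer's distance, ~1.5 m

print(f"selfie: nose ~{(selfie - 1) * 100:.0f}% oversized")    # ~33%
print(f"portrait: nose ~{(portrait - 1) * 100:.0f}% oversized")  # ~7%
```

The effect largely vanishes at portrait distance, which is why professional photographers stand well back and zoom in.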
For a group of Princeton researchers, enough was enough; they set out on a mission to fix the selfie once and for all. The researchers developed a tool designed to subtly modify a person's face to create the illusion that a shot has been taken from a little farther away, at a distance a professional photographer might opt for. It can also be used to slightly alter the pose of the subject, making it seem as if the camera were positioned lower, higher, or to the side.
The team started by creating a model for generating digital 3D heads, using data from FaceWarehouse – a database of 150 people, each photographed in 20 different poses, made by researchers at China's Zhejiang University. That was combined with a Carnegie Mellon-developed program that's able to pick out some six dozen key reference points across an image of a face, such as the top of the head, and the corners of the eyes.
The Princeton team's tool adjusts the 3D head model until its projection lines up with the reference points detected in a selfie, with the eye placement on the model matching that of the selfie, the nose being the right size and in the correct place, and so on.
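The paper fits a full 3D head model, but the flavor of this alignment step can be sketched with a simpler 2D Procrustes fit: find the scale, rotation, and translation that best map template landmark positions onto the landmarks detected in the photo. All coordinates below are made up for illustration.

```python
import numpy as np

def fit_similarity(model_pts, image_pts):
    """Least-squares similarity transform (scale s, rotation R,
    translation t) so that s * R @ model ~= image (Procrustes fit)."""
    mu_m = model_pts.mean(axis=0)
    mu_i = image_pts.mean(axis=0)
    M = model_pts - mu_m               # centered template landmarks
    I = image_pts - mu_i               # centered detected landmarks
    U, S, Vt = np.linalg.svd(M.T @ I)  # cross-covariance SVD (Kabsch)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    s = (S * np.array([1.0, d])).sum() / (M ** 2).sum()
    t = mu_i - s * R @ mu_m
    return s, R, t

# Invented landmark positions: a tiny template face, and the same face
# detected in an image at 1.8x scale, rotated 20 degrees, and shifted.
model = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, -1.5], [0.0, 1.0]])
theta = np.deg2rad(20)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
detected = 1.8 * model @ R_true.T + np.array([120.0, 80.0])

s, R, t = fit_similarity(model, detected)
print(round(s, 3))  # recovers the 1.8x scale
```

In the actual system the optimization happens in 3D, so the fitted model also captures depth, which is what makes the perspective adjustment in the next step possible.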
From that point, the tool can be used to update the coordinates of the reference points to match adjustments made to the 3D model, positioning the camera farther from the subject, or adjusting the angle. The whole thing takes just seconds, and the results, though subtle, are quite striking.
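Once landmarks have 3D positions, moving the virtual camera back is just a matter of re-projecting them. A toy sketch with an assumed face geometry (not the paper's model) shows how the nose's projected position relaxes when the camera retreats, even though the face stays the same size in frame:

```python
import numpy as np

def project(points, cam_z, focal):
    """Pinhole projection of 3D points (x, y, z) onto the image plane,
    for a camera on the z-axis at z = -cam_z looking toward +z."""
    pts = np.asarray(points, dtype=float)
    depth = pts[:, 2] + cam_z          # distance from camera along z
    return focal * pts[:, :2] / depth[:, None]

# Toy landmarks (meters): eyes on the face plane, nose tip 3 cm forward.
face = np.array([[-0.03,  0.00,  0.00],   # left eye
                 [ 0.03,  0.00,  0.00],   # right eye
                 [ 0.00, -0.04, -0.03]])  # nose tip, closer to camera

near = project(face, cam_z=0.30, focal=1.0)
# Move the camera 5x farther back, scaling the focal length so the
# eyes stay the same distance apart in the image.
far = project(face, cam_z=1.50, focal=5.0)

eye_span_near = near[1, 0] - near[0, 0]
eye_span_far = far[1, 0] - far[0, 0]
# The nose projects farther from center with the close camera:
# that exaggerated offset is the classic selfie distortion.
print(eye_span_near, eye_span_far, near[2, 1], far[2, 1])
```

The hard part the researchers actually solve is then warping the original pixels to follow the moved reference points without tearing the image, which is why the result keeps the photo's original colors.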
According to the researchers, the reason the tool is so effective is that it doesn't change too much about the image:
"I believe the reason the synthetic image looks so good is that it has exactly the same pixel colors as the original photo – it's just that they have been moved around a little bit to provide the illusion that the camera had been in a different location," said senior study author Adam Finkelstein.
According to the team, adjusting selfies is just the beginning for the tech. Down the line, the researchers believe it could be used to create "live" portraits – like something you might have seen in a Harry Potter film – or even for editing video.
The researchers plan to continue to improve the tool, in part to improve how it deals with hair, which, thanks to its highly varied texture and color, can look distorted when an image is adjusted.
"We still have a lot of research to do," said project member Ohad Fried. "We are happy with what we achieved so far, but we look forward to learning how we can make these selfie transformations appear even more realistic."
Details of the research are due to be published online in the journal ACM Transactions on Graphics.
Source: Princeton University