Researchers can now 3D-model a room just from your eye reflections
Neural radiance field (NeRF) technology is starting to show some incredible capabilities in turning 2D images and videos into 3D models, but University of Maryland researchers are taking things to another level, using nothing but eye reflections.
In a study now published on the pre-print server arXiv, the researchers demonstrate how they take multiple high-definition images of a person moving around a room, then zoom in on the reflections in the person's corneas. They flip those reflections, subtract the color and texture contributed by the iris, correct for the distortion introduced by the cornea's curved-mirror shape, and use the cleaned-up results to reconstruct a 3D model of the scene.
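To get a feel for the curved-mirror geometry involved, here's a toy sketch in Python. It is not the paper's method – just a minimal illustration assuming the cornea is a perfect convex spherical mirror of typical human radius, viewed head-on under orthographic projection. It maps a 2D offset on the cornea's surface to the 3D direction of the scene ray the camera sees reflected at that point.

```python
import numpy as np

# Illustrative assumption (not from the paper): the cornea is a perfect
# sphere with a typical human radius of curvature, and the camera looks
# straight down the -z axis with orthographic projection.
CORNEA_RADIUS_MM = 7.8

def reflected_direction(u, v, radius=CORNEA_RADIUS_MM):
    """Map a 2D offset (u, v) in mm from the cornea's centre to the
    unit 3D direction of the incoming scene ray seen reflected there."""
    u, v = float(u), float(v)
    r2 = u**2 + v**2
    if r2 >= radius**2:
        raise ValueError("point lies outside the corneal sphere")
    # Surface point on the sphere, and its outward unit normal.
    z = np.sqrt(radius**2 - r2)
    normal = np.array([u, v, z]) / radius
    # The camera's viewing ray travels along d = (0, 0, -1).
    # Mirror reflection about the normal: r = d - 2 (d . n) n
    d = np.array([0.0, 0.0, -1.0])
    reflected = d - 2.0 * np.dot(d, normal) * normal
    return reflected

# A ray hitting the exact centre of the cornea bounces straight back
# toward the camera (+z); off-centre points see rays arriving from the
# side, which is why a single eye reflects a wide slice of the room.
print(reflected_direction(0.0, 0.0))  # → [0. 0. 1.]
```

Because the mirror is convex, even small offsets across the cornea sweep the reflected ray across a wide field of view – the same property that lets a few millimetres of eye capture much of the room, and the reason the distortion must be undone before the images are useful.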
To be clear, these 3D models aren't very high-resolution; you can tell what the objects are, but not in much detail – and the team had to use specific lighting to bring out the effect.
And it's hard to say exactly who'd need this sort of technology, and for what – short of a shoehorned-in Mission: Impossible-type scenario.
The researchers came up with one real-world scenario to try this gear out in: they zoomed in on eye reflections in music videos featuring Miley Cyrus and Lady Gaga, hoping to take advantage of high-quality close-up footage and favorable lighting. Unfortunately, the resolution wasn't high enough, and the most they could determine was that Miley Cyrus may have been looking at a lighting grid, and Lady Gaga may have been looking at something vaguely shaped like a person's torso.
Either way, it's a fascinating look at just how much information can be extracted from a scene. It reminds me a little of a wild idea MIT was investigating back in 2014, in which researchers managed to reconstruct some of the audio inside a sealed room by taking high-speed video of a potato chip packet and analyzing its tiny vibrations to rebuild an audio waveform.