In a development sure to send conspiracy theorists into a tizzy, researchers at the Max Planck Institute for Informatics (MPII) have developed video inpainting software that can effectively delete people or objects from high-definition footage. The software analyzes each video frame and calculates what pixels should replace a moving area that has been marked for removal. In a world first, the software can compensate for multiple people overlapped by the unwanted element, even if they are walking towards (or away from) the camera. See the incredible video demonstration after the break.
The software was developed by a team led by Prof. Dr. Christian Theobalt, head of the Graphics, Vision, and Video research group at MPII, and Jan Kautz, Professor of Visual Computing at University College London. It represents a significant improvement over prior inpainting methods, which worked with low-resolution video but generated poor results with HD footage. The new method still generates some artifacts, but they are almost imperceptible to the untrained eye.
The software was inspired by shift maps, which take a portion of footage from one moment in time and move it to another to fill in the occluded area. Similar software has been used to remove people from Google's Street View for privacy reasons, but the technique has huge potential for the visual effects industry.
Due to the way shift maps work, the software is only effective when applied to scenes with a static background, where parts of the scene in some frames can provide a clean plate to draw from. In some instances the automatically generated result may not work, so the software has various tools to correct for errors.
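The shift-map idea described above – borrowing pixels from other moments in time, which only works when the background is static – can be sketched in a few lines. This is a simplified, temporal-only illustration (actual shift maps also search spatially and optimize for seam consistency), and the function name and array layout here are assumptions for the sake of the example, not the researchers' code:

```python
import numpy as np

def temporal_inpaint(frames, masks):
    """Fill masked pixels in each frame by borrowing the same pixel
    from the nearest frame in time where it is unoccluded.

    frames: (T, H, W) array of grayscale frames
    masks:  (T, H, W) boolean array, True = pixel marked for removal
    """
    T = len(frames)
    result = frames.copy()
    for t in range(T):
        ys, xs = np.nonzero(masks[t])
        for y, x in zip(ys, xs):
            # Frames in which this pixel shows the clean background
            clean = [s for s in range(T) if not masks[s, y, x]]
            if clean:
                # Prefer the temporally closest clean sample
                s = min(clean, key=lambda s: abs(s - t))
                result[t, y, x] = frames[s, y, x]
    return result
```

Because every masked pixel is filled from a frame where the static background is visible, the occluded region is reconstructed exactly – which is also why the approach breaks down when the camera or background moves and no clean plate exists.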
Removing and replacing unwanted elements in a filmed sequence is a common job for effects artists working on Hollywood blockbusters. The software could help artists replace actors with computer-generated creatures – as was done with Gollum in the film adaptations of The Hobbit and The Lord of the Rings – more quickly and easily than ever before. Whereas creatures are usually simply imagined by the actors on set, a physical stand-in adds greater realism and impact to a scene, interacting directly with the actors and affecting their surroundings in ways that would be difficult to cheat with traditional approaches.
As more and more films use the techniques pioneered by Andy Serkis as Gollum – who recently formed his own studio to promote the art of performance capture – effects artists will have to do ever more inpainting, so there's certainly a market for this kind of software. The same researchers are developing solutions to multiple problems in visual effects, which should make it cheaper, faster, and easier for the VFX industry to meet the demands of Hollywood studios and audiences alike.
Source: Max Planck Institute for Informatics
If they're going to call their demo "How Not to be Seen" it needs at least one explosion. ;-) (Look up the Monty Python skit of the same name.)
How is the algorithm able to remake the blocked person/object after inpainting? Is it using some image registration or image warping technique?