As any professional photographer knows, setting up lights can be a hassle. That's often true in the studio, and especially so when shooting on location. Before too long, however, it may be possible to use hovering autonomous drones as light sources. In fact, that's just what a team from MIT and Cornell University has already done. Their system not only does away with light stands, but the light-equipped aircraft automatically moves to compensate for movements of the model or photographer.
In a demo of the system, the researchers used a Parrot AR.Drone quadcopter to create rim lighting, an effect in which only one edge of the subject is strongly lit. The drone was equipped with a continuously shining halogen light, a photographic flash, and a lidar (Light Detection and Ranging) unit. It also communicated with an external computer.
The photographer's camera, meanwhile, was linked to that same computer, sending it images 20 times a second. A program analyzed those images, monitoring the width of the brightly lit area along the edge of the subject.
To start, the photographer indicated the direction from which they wanted the light to come, and the copter responded by flying to that side of the subject. The photographer then indicated how wide they wished the brightly lit rim to be, as a percentage of the total lit area of the subject, and the copter adjusted its position to create that effect.
When the subject or photographer moved, however, it would cause that width (and thus percentage) to change. Using an algorithm that was constantly assessing the image from the camera, the drone was able to compensate for those movements, restoring the lit area to its desired width.
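The researchers' actual control algorithm isn't detailed in the article, but the feedback idea can be sketched as a simple proportional controller: measure the rim width as a fraction of the lit area, compare it to the target, and nudge the drone to shrink the error. Everything below is a hypothetical illustration; the `measured_rim_fraction` model (rim width growing with the drone's angle around the subject) is an invented stand-in for the real image analysis.

```python
# Hypothetical sketch of the rim-width feedback loop, NOT the MIT/Cornell code.
# A one-dimensional toy model: the drone's angle around the subject (0 deg =
# directly behind, 90 deg = fully to the side) determines the measured rim width.

def measured_rim_fraction(drone_angle_deg: float) -> float:
    """Stand-in for image analysis: returns rim width as a fraction of the
    subject's lit area. In this toy model it grows linearly with the angle."""
    return max(0.0, min(1.0, drone_angle_deg / 90.0))

def control_step(angle_deg: float, target_fraction: float,
                 gain: float = 30.0) -> float:
    """One proportional-control update, run each time a new frame arrives
    (about 20 times a second in the demo)."""
    error = target_fraction - measured_rim_fraction(angle_deg)
    return angle_deg + gain * error  # move the drone to reduce the error

def converge(target_fraction: float, angle_deg: float = 45.0,
             steps: int = 50) -> float:
    """Repeatedly apply the controller until the rim settles at the target."""
    for _ in range(steps):
        angle_deg = control_step(angle_deg, target_fraction)
    return angle_deg
```

With this toy model, asking for a 20% rim from any starting angle drives the drone toward the position where the measured fraction equals 0.2; if the subject turns (changing the measured fraction), the same loop simply keeps correcting until the rim is restored.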
Rim lighting is particularly tricky, so producing more conventional lighting should presumably be easier. Likewise, for stationary subjects, the ability to compensate for movements won't be a crucial feature.
The system was developed by Manohar Srikanth when he was an MIT grad student, along with MIT's Prof. Frédo Durand and Cornell's Kavita Bala. It will be presented in August, at the International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging.
Source: MIT