While digital cameras have made it easier to take self-portraits, thanks to self-timers, face, smile and motion detection, and front-facing displays such as that on Samsung's DualView camera, changing the framing or altering settings still requires the user to run back to the camera to get things right. Now researchers at the University of Tsukuba's Department of Computer Science in Japan have developed technology that lets shutterbugs put themselves in the picture and snap a photo using Kinect-like hand gestures.
The initial system devised by Shaowei Chu and Jiro Tanaka requires a camera with a front-facing viewfinder, or one whose display can be flipped around to face the subject. Using a computer vision algorithm created by the researchers, the viewfinder, preferably of a decent size, displays an augmented live video view, while the system tracks and recognizes the subject's hand gestures in real time so that various camera functions can be controlled from a distance.
The algorithm detects the contour of the user's hand and the fingertips, highlighting the detected hand in green and the detected fingertips in red. In the version of the system detailed in the researchers' study, users are able to pan and tilt the camera and trigger the shutter.
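The study doesn't disclose the researchers' actual algorithm, but one common way to find fingertip candidates from a hand contour is to take the contour's convex hull and keep the hull points that stick out furthest from the hand's center. The sketch below illustrates that idea in pure Python; the function names, the 0.8 distance ratio, and the synthetic "hand" used in the usage note are all illustrative assumptions, not details from the paper.

```python
import math

def convex_hull(points):
    """Andrew's monotone chain convex hull; returns hull vertices in order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def fingertip_candidates(contour, ratio=0.8):
    """Keep hull points whose distance from the contour centroid is at
    least `ratio` of the maximum such distance (threshold is a guess)."""
    cx = sum(p[0] for p in contour) / len(contour)
    cy = sum(p[1] for p in contour) / len(contour)
    hull = convex_hull(contour)
    dists = [math.hypot(x - cx, y - cy) for x, y in hull]
    dmax = max(dists)
    return [p for p, d in zip(hull, dists) if d >= ratio * dmax]
```

Fed a synthetic contour of a round "palm" with five points extended like fingers, the function returns just the five extended points. A real system would first extract the contour from a skin-segmented camera frame (e.g. with OpenCV's `cv2.findContours`), then run a test like this on it.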
To pan and tilt, the user motions over four direction arrows in the lower-middle part of the display, while triggering the shutter requires holding a finger over a camera icon at the center-right of the display. Once the finger has been held steady over the icon for one second, a countdown from three to one is displayed before the shutter is activated.
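The dwell-then-countdown behavior described above amounts to a small state machine: idle until the fingertip lands on the icon, wait out the one-second dwell, count down from three, then fire. Here is a minimal sketch of that logic, assuming timestamps are fed in from the video loop; the class and method names are hypothetical, not from the researchers' system.

```python
class DwellShutter:
    """Fires the shutter after a finger dwells on the camera icon for
    `dwell_s` seconds, then a countdown from `countdown_from` to one."""
    IDLE, DWELL, COUNTDOWN, FIRED = range(4)

    def __init__(self, dwell_s=1.0, countdown_from=3):
        self.dwell_s = dwell_s
        self.countdown_from = countdown_from
        self.state = self.IDLE
        self.t0 = None  # time the current phase began

    def update(self, t, finger_on_icon):
        """Call once per video frame with the current time in seconds.
        Returns None, a countdown number to display, or "shutter"."""
        if self.state == self.IDLE:
            if finger_on_icon:
                self.state, self.t0 = self.DWELL, t
            return None
        if self.state == self.DWELL:
            if not finger_on_icon:  # finger moved away: reset
                self.state, self.t0 = self.IDLE, None
                return None
            if t - self.t0 >= self.dwell_s:
                self.state, self.t0 = self.COUNTDOWN, t
                return self.countdown_from
            return None
        if self.state == self.COUNTDOWN:
            remaining = self.countdown_from - int(t - self.t0)
            if remaining <= 0:
                self.state = self.FIRED
                return "shutter"
            return remaining
        return None
```

Feeding it one update per frame, a finger held steady from t=0 produces no output for the first second, then 3, 2, 1 at one-second intervals, then "shutter"; lifting the finger during the dwell phase resets the machine.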
For the study, the researchers used a Logitech Orbit AF camera capturing video at 640 x 480 pixel resolution at 30 fps, 1024 x 768 at 15 fps, and 1600 x 1200 at 5 fps. The camera offered a 189-degree pan range and a 102-degree tilt range, while the display measured 30 inches diagonally.
Such a display is obviously far too large for a portable digital camera, but the researchers say their preliminary experiments show that this kind of interface is promising for remotely manipulating digital cameras to take self-portraits, and they plan to examine new interaction techniques for use with small or screen-less live-view cameras.
They also plan to add new functions and improve the accuracy of the system's pan and tilt controls, and say the interface could be adapted to recognize head movements, such as a shake or a nod.
Source: University of Tsukuba Interactive Programming Laboratory via New Scientist