
Scientists take a backhanded approach to smartwatch interfaces

The WatchSense prototype in use – in the final version, the depth sensor would be incorporated into the watch

Although smartwatches are becoming capable of more and more functions, their touchscreens have to remain relatively small if the devices are still going to fit on people's wrists. As a result, we've recently seen attempts at extending the user interface beyond the screen. One of the latest, known as WatchSense, lets users control a mobile device by moving the fingers of one hand on and above the back of the other.

The WatchSense concept was developed by researchers from the Max Planck Institute for Informatics, the University of Copenhagen and Aalto University in Finland.

In its current proof-of-concept form, it incorporates a small 3D depth sensor worn on the forearm. That sensor tracks the positions of the user's index finger and thumb as they move on the back of the hand that's wearing the watch, as well as in the space above it. Custom software assigns different commands to different movements, allowing users to control various functions on a linked smartphone.
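The researchers haven't published details of how their software maps movements to commands, but the general idea can be illustrated with a short sketch. The Python snippet below is purely hypothetical: it assumes a depth sensor that reports 3D thumb and index fingertip positions, and the gesture thresholds and command names are invented for illustration rather than taken from WatchSense.

# A minimal, illustrative sketch only -- not the researchers' actual software.
# It assumes a depth sensor that reports 3D fingertip positions for the thumb
# and index finger, and maps a few simple on/above-the-hand gestures to
# hypothetical commands on a linked smartphone.

from dataclasses import dataclass
from typing import Optional


@dataclass
class FingerState:
    x: float       # position across the back of the hand, in cm
    y: float       # position along the back of the hand, in cm
    height: float  # height above the skin, in cm; ~0 means touching


def classify_gesture(thumb: FingerState, index: FingerState) -> Optional[str]:
    """Map one frame of fingertip readings to a command name (hypothetical mapping)."""
    index_touching = index.height < 0.5   # index finger resting on the back of the hand
    index_raised = index.height >= 2.0    # index finger hovering in the space above it

    if index_touching and thumb.height < 0.5:
        return "play_pause"               # both digits touch the skin: toggle playback
    if index_touching:
        return "select_track"             # index finger alone touches: pick a song
    if index_raised:
        # hovering gestures adjust volume depending on where the finger sits
        return "volume_up" if index.y > 0 else "volume_down"
    return None                           # no recognized gesture in this frame


# Example: an index finger hovering over the wrist end of the hand lowers the volume.
thumb = FingerState(x=0.0, y=0.0, height=3.0)
index = FingerState(x=1.0, y=-2.0, height=2.5)
print(classify_gesture(thumb, index))     # -> "volume_down"

In practice such a classifier would run continuously on streamed sensor frames and forward the resulting commands to the phone, but the mapping step itself is as simple as shown above.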

Although the sensor is presently separate from the user's smartwatch, the team believes that it will soon be possible to incorporate miniaturized depth sensors directly into watches.

In lab tests, WatchSense allowed users to adjust music volume and select songs more quickly than they could using a standard music app on an Android smartphone. It was also found to be "more satisfactory" than a touchscreen for virtual and augmented reality tasks, along with a map application and the control of a large external screen.

The technology will be demonstrated at the upcoming Conference on Human Factors in Computing Systems (CHI), taking place in Denver, Colorado.

Source: Saarland University
