Using touchscreens or pushbuttons to control devices isn't always practical, and voice commands may not work in loud environments. A new system offers an alternative: it utilizes acoustic waves that travel across the surface of existing objects.
Currently being developed at the University of Michigan, the experimental technology is known as SAWSense, with SAW standing for "surface acoustic waves."
Hardware-wise, it incorporates devices known as voice pickup units (VPUs), which are already utilized in bone conduction earphones. Each VPU contains an acoustic sensor housed within a hermetically sealed chamber that blocks out ambient noise.
As a result, when a VPU is placed on a surface such as a tabletop, it readily picks up the acoustic waves that travel through that surface as a person taps, scratches or swipes their finger across it. Loud environmental noises don't adversely affect its ability to do so, as long as the source of those noises isn't vibrating against the surface being used.
A machine-learning-based algorithm on a linked laptop matches the different wave patterns to the finger movements that produced them, with 97% accuracy so far. The system then executes preprogrammed computer commands corresponding to each movement.
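To give a rough sense of how such a pipeline could fit together, here is a minimal sketch in Python. It is not the team's actual implementation: the window size, the spectral features, the gesture labels, the command mapping and the simple nearest-centroid classifier are all illustrative assumptions standing in for the learned model described above.

```python
# Illustrative sketch only -- not the SAWSense implementation.
# Assumes VPU samples arrive as fixed-length windows, and uses hypothetical
# gesture labels and commands for demonstration.
import numpy as np

GESTURE_COMMANDS = {   # hypothetical gesture -> command mapping
    "tap": "select",
    "swipe": "scroll",
    "scratch": "back",
}

def spectral_features(window: np.ndarray) -> np.ndarray:
    """Reduce a window of surface-acoustic samples to a normalized
    magnitude spectrum -- a common first step before classification."""
    spectrum = np.abs(np.fft.rfft(window))
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)

class NearestCentroidGestures:
    """Stand-in for the learned model: stores one mean feature vector
    per gesture and labels new windows by dot-product similarity."""
    def __init__(self):
        self.centroids: dict[str, np.ndarray] = {}

    def fit(self, labeled_windows: dict[str, list[np.ndarray]]) -> None:
        for label, windows in labeled_windows.items():
            feats = np.stack([spectral_features(w) for w in windows])
            self.centroids[label] = feats.mean(axis=0)

    def predict(self, window: np.ndarray) -> str:
        feats = spectral_features(window)
        return max(self.centroids, key=lambda lbl: feats @ self.centroids[lbl])

def dispatch(model: NearestCentroidGestures, window: np.ndarray) -> str:
    """Classify an incoming window and look up its preprogrammed command."""
    return GESTURE_COMMANDS[model.predict(window)]

# Demo with synthetic placeholder data (real training would use recorded
# VPU windows for each gesture).
rng = np.random.default_rng(0)
train = {lbl: [rng.normal(size=256) for _ in range(5)] for lbl in GESTURE_COMMANDS}
model = NearestCentroidGestures()
model.fit(train)
print(dispatch(model, rng.normal(size=256)))  # prints a command, e.g. "select"
```

The same pattern extends beyond finger gestures: swapping in a different label set (and, in practice, a far richer model and feature set) is what allows the system to recognize coarser activities as well.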
Importantly, SAWSense can also identify different activities performed on a surface, based on their distinctive surface acoustic waves. In lab tests, for instance, the technology could differentiate between whisking, chopping and food-processing tasks performed on a kitchen countertop … and the possibilities don't stop there.
"This technology will enable you to treat, for example, the whole surface of your body like an interactive surface," said Yasha Iravantchi, a doctoral candidate in computer science and engineering. "If you put the device on your wrist, you can do gestures on your own skin. We have preliminary findings that demonstrate this is entirely feasible."
SAWSense is demonstrated in the video below. A paper on the system was presented last week in Hamburg, Germany, at the 2023 Conference on Human Factors in Computing Systems.
Source: University of Michigan