Along with sights and sounds, some VR systems also deliver tactile sensations to users' hands. A new ultrasound-based setup, however, lets users feel the virtual world on and in their mouths – without making physical contact.
The inexpensive system is being developed within Carnegie Mellon University's Future Interfaces Group by a team including PhD student Vivian Shen, postdoctoral fellow Craig Shultz and Assoc. Prof. Chris Harrison. It incorporates a phased array of 64 small ultrasound transducers, attached to the underside of an existing VR headset so that they sit just above the wearer's mouth.
Working in sync with the sights and sounds delivered through the headset, the transducers emit acoustic waves that travel through the air to the mouth. The firing of the transducers is timed so that their waves constructively interfere with one another at specific points in space: on the user's lips, teeth or tongue.
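The delay calculation behind that timing is standard phased-array beamforming: each transducer fires slightly later the closer it sits to the target, so that all 64 wavefronts arrive at the focal point at the same instant. As a rough illustration of the idea, not the team's actual driver code, here is a minimal Python sketch; the 8 x 8 grid layout, 10 mm element pitch and 5 cm focal distance are assumptions, not published specifications.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second in air at ~20 °C

def focus_delays(element_positions, focal_point, c=SPEED_OF_SOUND):
    """Per-element firing delays (in seconds) chosen so every element's
    wavefront reaches focal_point at the same instant, producing
    constructive interference there."""
    distances = np.linalg.norm(element_positions - focal_point, axis=1)
    # The farthest element fires first (zero delay); nearer ones wait.
    return (distances.max() - distances) / c

# Hypothetical geometry: an 8 x 8 grid, 10 mm pitch, in the z = 0 plane.
pitch = 0.01
xs, ys = np.meshgrid(np.arange(8) * pitch, np.arange(8) * pitch)
elements = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(64)])

# Focus 5 cm in front of the array centre (roughly where the lips sit).
delays = focus_delays(elements, np.array([0.035, 0.035, 0.05]))
```

In a real device these delays would be quantized to the drive electronics' clock, and the output would typically be amplitude-modulated at a low frequency, since skin cannot perceive the ultrasonic carrier directly.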
The resulting vibrations are experienced as tactile sensations, replicating the feel of activities such as drinking from a water fountain, or even kissing. Depending on what sort of action is being simulated, the ultrasound can take the form of focused pulses, swipes across the mouth, or persistent vibrations.
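One plausible way to build a swipe on top of a static focus is to refocus the array along a path at a steady rate. The sketch below reuses `focus_delays` and `elements` from the previous example; the 200 Hz update rate, half-second duration and lip coordinates are illustrative assumptions, not figures from the paper.

```python
def swipe(elements, start, end, duration_s=0.5, update_hz=200):
    """Yield one delay pattern per update tick, sweeping the focal
    point linearly from `start` to `end` so it reads as a stroke."""
    steps = int(duration_s * update_hz)
    for t in np.linspace(0.0, 1.0, steps):
        yield focus_delays(elements, (1.0 - t) * start + t * end)

# Sweep across the lips, left to right (coordinates are assumptions).
for delay_pattern in swipe(elements,
                           np.array([0.00, 0.035, 0.05]),
                           np.array([0.07, 0.035, 0.05])):
    pass  # each pattern would be pushed to the transducer drivers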
In tests on 16 volunteers, the system proved particularly good at delivering the sensations of brushing one's teeth, feeling raindrops coming through an open window, and feeling a spider walking across one's lips.
It was less successful with sensations such as walking through cobwebs, as the test subjects expected to feel those on their whole bodies. In fact, Shen says the technology wouldn't work well on other parts of the body, which lack the high density of nerve endings found in the hands and mouth.
The scientists are now working on making the system smaller and lighter, and on adding more haptic effects to its repertoire. They are presenting their work this Monday in New Orleans, at the Association for Computing Machinery's CHI Conference on Human Factors in Computing Systems.
The system is demonstrated in the video below.
Sources: Carnegie Mellon University, Future Interfaces Group