The smallest gesture can hide a world of meaning. A particular flick of a baton and a beseeching gesture can transform the key moment of a concert from mundane to ethereal. Alas, computers are seriously handicapped in understanding human gestural language, in both software and hardware. In particular, finding a method for describing gestures presented to a computer as input data for further processing has proven a difficult problem. In response, Microchip Technology has developed the world's first 3D gesture recognition chip that senses a gesture without contact, through its effect on electric fields.
Why are normal human gestures so difficult to translate into a form suitable for computers? The meaning of a gesture is not a simple hand position or path of motion, but a gestalt – an approximate summary of the entire gesture. Deciphering the intended meaning of a gesture from data acquired by tracking the movement of every portion of each finger and joint of the hand and wrist is a task that nearly defies description. This is one of the main reasons that, despite at least 30 years of effort, artistry has consistently eluded any computer-based orchestra controlled by a human conductor. Despite this, gesture control remains an active area of development, because of the enormous market that awaits a practical system.
Microchip Technology has recently unveiled their GestIC technology as implemented in the soon-to-be-available MGC3130 chip, an outgrowth of an earlier technology. When used as a 3D digitizer, the MGC3130 resolves position within a 15 cm (6 in) cube at a remarkable resolution of 150 dpi. (Yes, that's vertical resolution as well as in the plane, meaning that roughly a billion voxels (3D pixels) can be distinguished within the scanning volume.) The sampling rate is 200 measurements per second, allowing the GestIC technology to follow quick adjustments of hand and finger positions, velocities, and accelerations.
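The "roughly a billion voxels" figure follows directly from the stated numbers, as a quick back-of-the-envelope check shows:

```python
# Checking the voxel count: a 15 cm (6 in) cube scanned at 150 dpi
# along all three axes, per the figures above.
dpi = 150                          # stated resolution, dots per inch
side_in = 6                        # edge of the scanning cube, inches
points_per_axis = dpi * side_in    # 900 resolvable points per axis
voxels = points_per_axis ** 3      # 900^3 distinguishable 3D positions
print(points_per_axis)  # 900
print(voxels)           # 729000000 -- roughly a billion
```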
The MGC3130 enables a new approach to the problem of human-machine interfacing (HMI), recognizing gestures by measuring the changes in an electric field as the gesture is made. When gestures are sensed via their effect on electric fields, the step of precisely measuring hundreds of positions for each millisecond of a gesture and converting that data into a concise description of a gesture is no longer needed. Instead, a vastly simpler procedure can be adopted. The output of an electric field-based gesture sensor is itself something of a gestalt of a gesture, which has the potential to greatly simplify the interpretation of gestures.
GestIC technology detects gestures through the changes which appear in a circumambient electric field. The chip generates an excitation voltage having a frequency around 100 kHz. The excitation voltage is applied between the transmitter electrode and a ground plane (in commercial practice, the ground plane will be provided by the device using GestIC technology). This sets up an electric field that extends from the transmitter electrode into the scanning region above the electrodes. As the wavelength of the excitation voltage is far larger than the size of the electrodes, the electric field is quite uniform throughout the scanning region.
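A quick calculation shows just how much larger the excitation wavelength is than the electrodes, and hence why the field is effectively uniform (quasi-static) over the scanning region:

```python
# Free-space wavelength of the ~100 kHz excitation signal.
c = 3.0e8        # speed of light, m/s
f = 100e3        # excitation frequency, Hz (article's ballpark figure)
wavelength_m = c / f
print(wavelength_m)  # 3000.0 -- about 3 km, vs. electrodes of a few cm
```

With the electrodes some five orders of magnitude smaller than the wavelength, no appreciable phase variation occurs across the scanning volume.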
When a user reaches into the scanning area, the electric field changes in response. Electric field lines must approach a conducting body perpendicular to the surface of that body. This is shown in the image above, where the field lines which pass near the hand are shunted to ground through the conductivity of the human body itself. (The person operating the device must be grounded to the ground plane.) The position of the hand within the sensing volume causes a compression of the equipotential lines and reduces electrode signal levels.
Instead of producing a scanned map of points on the surface of the hand, however, the MGC3130 measures a small number of analog voltages – the five voltage differences between the various electrodes and the ground plane. This analog data provides a highly compressed signature of the gesture. It can't be used to uniquely model the position of the hand, as there is simply not enough information in the data. Even so, this data can be used to accurately identify gestures.
A given gesture always produces the same signature, and gestures close to the given gesture will produce similar signatures, as will the same gesture being presented by a user with a larger or smaller hand. (This is equivalent to saying that mapping electric field gesture detection onto actual gestures is mathematically a continuous function.) As the system is now dealing with tens of bits of data instead of thousands of bits of data, the job of recognizing patterns associated with particular types of gestures becomes far easier, in analogy to the image preprocessing which occurs in the retina before the processed data is presented to the visual cortex.
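This continuity property can be illustrated with a toy nearest-template matcher. To be clear, the five-channel signatures below are invented for illustration, not real MGC3130 readings; the point is only that a slightly perturbed signature (a rotated gesture, or a smaller hand) still lands nearest its own template:

```python
import math

# Hypothetical five-channel gesture signatures (made-up values).
TEMPLATES = {
    "flick_left":  [0.82, 0.40, 0.15, 0.55, 0.30],
    "flick_right": [0.15, 0.40, 0.82, 0.55, 0.30],
    "circle":      [0.50, 0.70, 0.50, 0.30, 0.60],
}

def classify(signature):
    """Return the stored gesture whose template is nearest (Euclidean)."""
    return min(TEMPLATES, key=lambda t: math.dist(signature, TEMPLATES[t]))

# A slightly perturbed "circle" signature still classifies as "circle",
# because the signature-to-gesture mapping is continuous.
print(classify([0.48, 0.72, 0.53, 0.28, 0.58]))  # circle
```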
Imagine that a user places their hand in the sensing volume, and then curls their thumb and forefinger together in an "A-OK" gesture. The GestIC sensor produces five voltages which are characteristic of that gesture. It doesn't know or care (speaking anthropomorphically) that the thumb and forefinger are touching and the other three fingers are splayed outward. Neither can a computer determine the position of the hand by analysis of the five voltages: the detailed position information is simply not in that data.
Instead, the sensor's MPU says to itself "the voltages swooped about pretty quickly with time and then settled down into a new pattern. I guess this is a new gesture. Let's compare the sizes of the present voltages with a bunch of patterns of standard gestures in my memory. Hmmmm. These voltages seem to match pretty well with a slightly rotated 'A-OK' gesture – at least, better than anything else in my recognition patterns. Don't know what that means, but I'll send my decision over to be used as input data by the rest of the program."
The operational software that emulates this inner dialog is part of the Colibri software suite that supports the chip. Comparison and recognition of input patterns are carried out by a stochastic Hidden Markov model analysis that is preprogrammed with a reliable set of standard 3D hand and finger gestures (no, not that one) that developers can easily employ in their products. Examples include position tracking (essentially digitizing the position of a fingertip), flick, circle, and symbol gestures, and many more. A system can also be activated from a standby condition by a stylized gesture. The Colibri suite allows developers to rapidly and inexpensively incorporate GestIC technology into products, as the programming for a basic human-machine interface has been provided.
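The Colibri suite's actual models are proprietary, but the general flavor of Hidden Markov model gesture scoring can be sketched generically. Everything below – the states, symbols, and probabilities – is invented purely to illustrate the forward algorithm; a real system would quantize the electrode signatures and keep one trained model per gesture:

```python
# Generic HMM forward algorithm: the probability that a given gesture
# model produced the observed sequence. All numbers are illustrative.

def forward(obs, states, start_p, trans_p, emit_p):
    """Score an observation sequence against one gesture model."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {
            s: sum(alpha[p] * trans_p[p][s] for p in states) * emit_p[s][o]
            for s in states
        }
    return sum(alpha.values())

# Toy "flick" model: the hand enters, sweeps across, then exits.
states = ["enter", "sweep", "exit"]
start_p = {"enter": 1.0, "sweep": 0.0, "exit": 0.0}
trans_p = {
    "enter": {"enter": 0.3, "sweep": 0.7, "exit": 0.0},
    "sweep": {"enter": 0.0, "sweep": 0.6, "exit": 0.4},
    "exit":  {"enter": 0.0, "sweep": 0.0, "exit": 1.0},
}
emit_p = {  # "lo"/"hi" stand in for coarsely quantized signal levels
    "enter": {"lo": 0.8, "hi": 0.2},
    "sweep": {"lo": 0.2, "hi": 0.8},
    "exit":  {"lo": 0.9, "hi": 0.1},
}

# A recognizer would score every candidate model and pick the best.
score = forward(["lo", "hi", "hi", "lo"], states, start_p, trans_p, emit_p)
print(0 < score < 1)  # True
```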
At present, the MGC3130 chip will be supplied in a 28-lead 5 x 5 mm QFN package. The frequency of the electric field is variable between 70 and 130 kHz, and the chip's firmware enables frequency hopping to substantially eliminate RF interference. The power requirements are very small: about 100 milliwatts while actively detecting and processing gestures, about 150 microwatts in standby mode, and about 30 microwatts in a deep sleep mode. Both an MPU and a firmware version of the Colibri software suite are integrated on the one chip.
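Those figures invite a back-of-the-envelope battery estimate. The 1 percent duty cycle below is an assumed example value, not a Microchip specification; only the per-mode power figures come from the numbers above:

```python
# Average power draw under an *assumed* usage pattern.
active_w  = 100e-3   # ~100 mW while detecting and processing gestures
standby_w = 150e-6   # ~150 uW in standby
duty = 0.01          # assumption: actively sensing 1% of the time
avg_w = duty * active_w + (1 - duty) * standby_w
print(avg_w)  # on the order of a milliwatt
```

Under that assumption the chip averages roughly a milliwatt, which is why battery-powered consumer devices are a plausible home for the technology.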
Only a set of electrodes and eleven discrete electronic components are required for full operation of a GestIC system. This circuitry is provided in Microchip’s Sabrewing MGC3130 Single Zone Evaluation Kit. The Sabrewing comes with selectable 5" or 7" electrodes and the AUREA Graphical User Interface, which allows designers to easily match their system commands to Microchip’s Colibri Suite (also included). The evaluation kit costs US$169.
Let's imagine an application perhaps two generations down this development path. In front of you appears a somewhat larger set of electrodes. The transmitter electrode delivers an electric field of two different frequencies, while your right and left arms are connected to the ground plane through filters. In this way, gestures of your right hand can be separated from gestures of your left hand. The application is a 3D sculpting program, in which virtual clay is formed by the motions of your hands and fingers. The virtual clay could be spinning for throwing pots, or fixed for more traditional sculpture. Once you had obtained a pleasing sculpture, the program would send the description to a 3D printer or CAM system to fix it in the sculpting material of your choice.
Such machines might be available not only for professional sculpture, but at a suitable price point might be used to encourage artistic talent and imagination in children. Who knows, perhaps even artists of other species (elephants, apes, etc.) might benefit from this new technology…
The video below presents a panoramic overview of the GestIC system.
Source: Microchip Technology