Engineers at MIT’s CSAIL have developed a smart carpet that can accurately estimate a person’s movements or body pose without needing cameras. The system could be useful for exercise feedback, monitoring falls, or tracking for VR and gaming.
The prototype mat measures 36 ft² (3.3 m²) and packs more than 9,000 pressure sensors, formed where conductive threads cross a layer of pressure-sensitive film. Essentially, when weight is put on different parts of the carpet, the electrical signals vary with the amount of pressure, where on the mat it is applied, and the relative locations of the pressure points.
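To picture how such a sensor grid might be read in software, here is a minimal sketch. It is not the team's actual firmware: the 96 × 96 grid size (9,216 taxels, consistent with "over 9,000 sensors"), the `read_taxel` callback, and the threshold are all assumptions for illustration.

```python
import numpy as np

def read_pressure_frame(read_taxel, rows=96, cols=96):
    """Scan a rows x cols grid of taxels into one 2D pressure frame.

    read_taxel(r, c) stands in for whatever ADC read the real
    carpet's electronics perform; it is a hypothetical interface.
    """
    frame = np.empty((rows, cols), dtype=np.float32)
    for r in range(rows):
        for c in range(cols):
            frame[r, c] = read_taxel(r, c)
    return frame

def active_region(frame, threshold=0.1):
    """Bounding box (rmin, rmax, cmin, cmax) of taxels above threshold,
    or None if nothing is pressing on the mat."""
    rows, cols = np.nonzero(frame > threshold)
    if rows.size == 0:
        return None
    return rows.min(), rows.max(), cols.min(), cols.max()
```

A footprint then shows up as a localized block of high readings, whose position and extent are the raw material for the pose model.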
The system was first trained on a synchronized combo of tactile input from the mat and corresponding video of people performing different actions such as walking, sit-ups, push-ups, yoga poses, sitting, lying down, rolling, or standing on their tiptoes.
Then, the pressure maps of these actions were assigned to virtual models of a person performing them, allowing the system to estimate a person’s body pose based solely (excuse the pun) on the pressure data. Even upper body movements can be inferred fairly accurately – the system can, for example, tell if a person is bending left or right based on which foot they’re shifting their weight to.
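One way to see how a left or right lean is visible in the tactile data alone is a center-of-pressure calculation, a standard biomechanics quantity. The sketch below is a deliberately crude stand-in for the CSAIL model, not its actual method:

```python
import numpy as np

def center_of_pressure(frame):
    """Pressure-weighted mean of taxel coordinates -> (row, col),
    or None if the mat reads zero everywhere."""
    total = frame.sum()
    if total == 0:
        return None
    rows, cols = np.indices(frame.shape)
    return (rows * frame).sum() / total, (cols * frame).sum() / total

def lean_direction(frame, midline_col):
    """Guess a left/right lean from which side of the mat's midline
    carries more weight (an illustrative heuristic only)."""
    cop = center_of_pressure(frame)
    if cop is None:
        return None
    return "left" if cop[1] < midline_col else "right"
```

When a person shifts weight onto one foot, the center of pressure migrates toward that side of the mat, which is the kind of cue the trained model can exploit to infer upper-body motion.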
All up, that means the smart carpet can learn what, say, a lunge looks like without any input from a camera. The CSAIL team says that the system was 97 percent accurate in identifying specific actions, and could predict a person’s pose to within 10 cm (3.9 in).
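The real system is a deep network trained against synchronized video, but the core idea of matching a pressure map to a known action can be illustrated with a toy nearest-centroid classifier. Everything below (class name, frame sizes, labels) is a simplified stand-in, not the published model:

```python
import numpy as np

class NearestCentroidActionClassifier:
    """Toy stand-in for the carpet's learned model: label a pressure
    frame by its distance to the mean training frame of each action."""

    def fit(self, frames, labels):
        # One centroid (mean pressure frame) per action label.
        self.centroids = {
            label: np.mean([f for f, l in zip(frames, labels) if l == label], axis=0)
            for label in set(labels)
        }
        return self

    def predict(self, frame):
        # Nearest centroid in Euclidean distance wins.
        return min(self.centroids,
                   key=lambda label: np.linalg.norm(frame - self.centroids[label]))
```

Standing concentrates pressure under two small footprints while lying down spreads it across a long strip, so even this crude distance check separates the two.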
“You could envision using the carpet for workout purposes,” says Yunzhu Li, co-author of the study. “Based solely on tactile information, it can recognize the activity, count the number of reps, and calculate the amount of burned calories.”
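Rep counting from tactile data can be as simple as detecting rising edges of the summed mat pressure over time. The threshold and the one-crossing-per-rep assumption below are invented for illustration and are not the CSAIL system's actual method:

```python
def count_reps(total_pressure, threshold):
    """Count repetitions as rising edges of the summed mat pressure
    crossing a threshold (e.g. hands hitting the mat on each push-up)."""
    reps = 0
    above = False
    for p in total_pressure:
        if p > threshold and not above:
            reps += 1          # new crossing -> one more rep
            above = True
        elif p <= threshold:
            above = False      # re-arm for the next rep
    return reps
```

Calorie estimates would then follow from the recognized activity and the rep count, using standard per-activity energy figures.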
Other possible applications include monitoring the elderly for falls, helping injured people through rehab, and tracking a player’s movements in VR or video games. It could be less cumbersome than wearable trackers, easier to set up than infrared sensors, and more private than cameras.
The team says that the carpet is also easily scalable and fairly low cost – the prototype was built for less than US$100. Next up, the researchers aim to find ways to collect more information from the signals, such as a user’s height or weight, and adapt it for multiple users at once.
The research was published in the Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. The team demonstrates the smart carpet in the video below.
Source: MIT