Gesture-tracking tech allows kids to conduct Blob Opera with a wave
Toward the end of 2020, digital artist David Li collaborated with Google's Arts and Culture lab to release a fun machine learning experiment called the Blob Opera. Now Stuck Labs has developed a touchless interface that allows young conductors to control the operatic action with a wave of their arms.
Stuck Labs is the innovation arm of Singapore's Stuck design studio, and is made up of hardware and software engineers, fabricators, scientists and designers. Back in January, the team created an elevator button concept that tracks the movement of a finger as it approaches, allowing the button to be pressed without physical contact, and so Kinetic Touchless was born.
Then in March, Stuck Labs turned its attention to the kind of sliding doors seen in shopping malls, hotels and airport lounges. Rather than having the door simply open as someone approaches, letting heat escape from inside while cold air floods in, the designers created a sensor that opens the door with a wave of the hand. Once the person is through the gap, the door can then close automatically.
The third iteration of the Kinetic Touchless sensing technology, the details of which Stuck Labs is keeping close to its chest, was inspired by the clever and very addictive Blob Opera experiment created by David Li and the Google Arts and Culture lab.
The browser experiment allows users to control colorful characters onscreen, dragging each one up or down to alter operatic pitch, or forward/back to change the singer's mouth shape to voice different vowel sounds. And performances can be recorded for playback.
But rather than using a computer mouse, Stuck Labs decided to make controlling the Blob Opera more realistic and engaging by developing a touchless interface that's able to track the gestures of young conductors at the podium and have the AI-driven vocalists perform to the wave of a hand, the raising of an arm, and so on.
"Kinetic Touchless 3.0 mirrors and enlarges one’s body motion to give children the power to physically conduct virtual opera blobs," Stuck Labs explained. "By correlating the natural movements of conducting to the controls needed to operate Blob Opera, Kinetic Touchless 3.0 gives you the ability to conduct as you would a real-life orchestra. Practicing conducting won't have to be purely just an arm workout, nor will playing with Blob Opera be only a two-dimensional experience."
You can see some delighted kids conducting the Blob Opera to create a Touchless Symphony in the video below.
Update September 20: Kevin Yeo from Stuck Labs has offered us a bit more insight into the project:
"Kinetic Touchless 3.0 retains the essence of its two predecessors – sensing the user's gestural input and providing a touchless interface to the interactive touchpoint. The main difference is the methodology used to sense and interpret the inputs. In the first two instalments, the interaction occurs on a one-dimensional level, where the user either moves their hand in-to-out or left-to-right in a linear motion. In Kinetic Touchless 3.0, the interaction has been upgraded to accept two-dimensional data derived from the user's hands. The system encased in the conductor stand is able to pick up the left-to-right motion and the up-and-down motion of the user's hands. With this two-dimensional data, we are able to remap the coordinates into input signals to control the interaction.
In Kinetic Touchless 3.0, we had two main interaction challenges. Firstly, the mental model for controlling such interfaces is different for every user. At the start, we accounted for the different heights of users and allowed for a more adaptive system that would 'self-zero' based on the user's height. Unfortunately, this algorithm created a less-than-optimal interaction experience that did not work well for everybody. Eventually, we fell back on a simpler mapping algorithm. Yet another testament to Occam's razor. The other challenge was how we should interpret the data from the user's two hands. Options like using only the positional data from the right hand when both hands are present, and discarding the left hand's data, crossed our minds too. In the end, we went for a hybrid combo: if only one hand is present, a simple mapping function is used, and if two hands are present, the left hand acts as an origin and the distance between the left and right hands is used as the input to the mapping function.
Overall, the whole project was very fun and interesting; we truly enjoyed every step of creating this new interaction!"
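Stuck Labs is keeping its implementation under wraps, but the two ideas Yeo describes – remapping 2D hand coordinates into Blob Opera's pitch and vowel controls, and the hybrid one-hand/two-hand scheme – can be sketched in a few lines of Python. Everything here (the sensor bounds, axis assignments and function names) is a hypothetical illustration, not the studio's actual code:

```python
def normalize(value, lo, hi):
    """Clamp and scale a raw sensor reading into the 0..1 range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def simple_map(x, y):
    """Map a normalized (x, y) hand position onto Blob Opera's two axes:
    vertical position drives pitch, horizontal position drives the vowel.
    Sensor y grows downward, so a higher hand means a higher pitch."""
    return {"pitch": 1.0 - y, "vowel": x}

def conductor_input(hands, x_bounds=(0, 640), y_bounds=(0, 480)):
    """hands: list of raw (x, y) positions reported by the sensor.
    One hand:  map its position directly (the 'simplistic mapping').
    Two hands: treat the left hand as the origin and map the offset of
    the right hand relative to it, per the hybrid scheme described."""
    norm = [(normalize(x, *x_bounds), normalize(y, *y_bounds))
            for x, y in hands]
    if len(norm) == 1:
        return simple_map(*norm[0])
    left, right = sorted(norm, key=lambda h: h[0])  # leftmost hand first
    return simple_map(right[0] - left[0], right[1] - left[1])
```

For example, a single hand held at the center of a 640 x 480 sensing area maps to mid-pitch, mid-vowel; with two hands, only their relative separation matters, so the conductor's whole posture can shift without changing the note. A production system would presumably also clamp the two-hand output and smooth the signal over time.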
Source: Stuck Labs