
Google project creates hardware interface for algorithm sound generator

The NSynth Super interface can be played with any MIDI source, such as a DAW, sequencer or keyboard
The NSynth Super interface will not be released commercially, but the code, schematics and design templates are available for download under an open source license
After selecting sound sources using the dials on the NSynth Super interface, a musician can generate entirely new sounds by dragging a finger across the touchscreen

The folks at Google's Magenta project have unveiled a hardware interface for an algorithm-based synthesizer that uses a deep neural network to generate completely new sounds. Musicians drag a finger around the NSynth Super's colorful touchscreen to explore the unique sounds offered up by the machine learning algorithm.

The NSynth Super interface offers musicians a way to go hands-on with the Magenta project's NSynth – or Neural Synthesizer – algorithm, which uses a neural network to learn the sound characteristics of different sources and generate completely new sounds from what it has learned. The project stresses that these new creations are not merely sonic blends, but entirely new sounds that would be difficult to produce with a hand-tuned synth.

The NSynth algorithm identifies and extracts 16 features from each sound source input – a sound's core attributes, or what makes a sound, erm, sound like it does. "These features are then interpolated linearly to create new embeddings (mathematical representations of each sound)," explained the project. "These new embeddings are then decoded into new sounds, which have the acoustic qualities of both inputs." So, for example, a newly generated sound might have both flute-like and sitar-like qualities, but the new sound will be neither one nor the other. It will be unique.
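The linear interpolation step described above can be sketched in a few lines. This is an illustrative toy, not NSynth's actual code: the embedding values here are random stand-ins for the 16 features the algorithm extracts, and the function name is hypothetical.

```python
import numpy as np

# Hypothetical 16-dimensional embeddings for two source sounds
# (random stand-ins for the features NSynth extracts).
flute_embedding = np.random.rand(16)
sitar_embedding = np.random.rand(16)

def interpolate(emb_a, emb_b, t):
    """Linearly interpolate between two embeddings.
    t = 0.0 returns emb_a, t = 1.0 returns emb_b."""
    return (1.0 - t) * emb_a + t * emb_b

# Halfway between the two sources: a new embedding that the decoder
# would turn into a sound with qualities of both inputs.
blended = interpolate(flute_embedding, sitar_embedding, 0.5)
```

In the real system, the blended embedding is decoded back into audio by the neural network, which is what makes the result a genuinely new sound rather than a crossfade of the two recordings.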

While the Magenta project has been looking at ways to interact with the NSynth algorithm, the NSynth Super is the first hardware interface and was developed in collaboration with the Google Creative Lab. The interface can be played with any MIDI source, such as a DAW, sequencer or keyboard, and the algorithm can generate over 100,000 new sounds by drawing from different sources.

In an experiment, 16 source sounds across 15 different pitches were recorded in a studio and then fed into the NSynth algorithm. Each dial on the Super interface was assigned four source sounds. The characteristics of selected source sounds could then be manipulated using the touchscreen display up top to make novel sounds.
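One plausible way to picture the touchscreen manipulation is as a two-dimensional blend between the embeddings of the selected source sounds. The sketch below assumes four sources assigned to the corners of the touch surface and blends them bilinearly from the finger position; the layout, names and random embeddings are assumptions for illustration, not the documented NSynth Super internals.

```python
import numpy as np

# Hypothetical embeddings for four selected source sounds, one per
# corner of the touch surface (random stand-ins, 16 features each).
corners = {
    "top_left": np.random.rand(16),
    "top_right": np.random.rand(16),
    "bottom_left": np.random.rand(16),
    "bottom_right": np.random.rand(16),
}

def touch_to_embedding(x, y, corners):
    """Bilinearly blend four corner embeddings from a touch position,
    with x and y each in [0, 1]."""
    top = (1 - x) * corners["top_left"] + x * corners["top_right"]
    bottom = (1 - x) * corners["bottom_left"] + x * corners["bottom_right"]
    return (1 - y) * top + y * bottom

# A touch near the centre yields an embedding that mixes all four sources.
centre_blend = touch_to_embedding(0.5, 0.5, corners)
```

With four sources per dial and a continuous touch surface, even this simple scheme makes it easy to see how the instrument can reach a very large space of distinct sounds.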

The project has created a few working prototypes of the NSynth Super and put them in the hands of musicians, but the hardware won't be released as a commercial product. However, all of the code, schematics and design templates are available to download from GitHub, should sonic scientists wish to make their own versions and experiment with new sounds. You can see the potential for music creation in the video below.

Source: NSynth Super project

Making music with NSynth Super
