Machine learning drives NSynth Super's new sounds of music
Look what the Project Magenta team has been up to. After creating the NSynth algorithm, they have built a machine to serve as an instrument: a physical interface for the NSynth algorithm.
So what is the machine—and algorithm—all about?
First off, Magenta is a project within Google that focuses on machine learning tools for helping artists create art and music in new ways. The team worked with Google Creative Labs to create the NSynth Super instrument, so musicians can now make music using sounds generated by the NSynth algorithm from four different source sounds.
NSynth Super has a touch screen: you can drag your fingers across it to play sounds.
(NSynth is short for "Neural Synthesizer." NSynth uses a deep neural network to learn the characteristics of sounds and to create new sounds based on those characteristics.)
Google posted a video on Tuesday in which you can see, and hear, the instrument in action: London-based producer Hector Plimmer explores sounds generated by the NSynth machine learning algorithm.
NSynth Super can be played via any MIDI source, such as a DAW, sequencer or keyboard. (MIDI stands for Musical Instrument Digital Interface; a DAW is a digital audio workstation.)
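MIDI itself carries only note and control numbers, not audio, so any synth on the receiving end must map note numbers to pitches. The standard mapping (equal temperament, A4 = note 69 = 440 Hz) is a one-liner; this sketch is generic MIDI background, not code from the NSynth Super project:

```python
def midi_to_hz(note):
    """Convert a MIDI note number (0-127) to a frequency in Hz.

    Uses the standard equal-temperament mapping: A4 is MIDI note 69
    at 440 Hz, with 12 semitones per octave.
    """
    return 440.0 * 2 ** ((note - 69) / 12)

# Middle C (MIDI note 60) comes out near 261.63 Hz.
freq = midi_to_hz(60)
```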
So how does this instrument actually work? In an experiment, 16 original source sounds across a range of 15 pitches were recorded in a studio and then fed into the NSynth algorithm to precompute the outputs: over 100,000 new sounds, which were loaded into the experience prototype. Each dial was assigned four source sounds. Using the dials, musicians select which source sounds they would like to explore between, then drag a finger across the touchscreen to navigate sounds that combine the acoustic qualities of the four sources.
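The article does not spell out how a touch position maps onto the four chosen source sounds, but a natural scheme for a four-corner pad is bilinear weighting: each corner's influence falls off with distance from that corner. This is a hypothetical sketch of that mapping, not code from the NSynth Super repository, and the corner ordering is an assumption:

```python
import numpy as np

def corner_weights(x, y):
    """Bilinear weights for four corner sounds, given a touch
    position (x, y) with both coordinates in [0, 1].

    The four weights are non-negative and always sum to 1, so they
    can be used directly as mixing coefficients.
    """
    return np.array([
        (1 - x) * (1 - y),  # bottom-left source sound
        x * (1 - y),        # bottom-right source sound
        (1 - x) * y,        # top-left source sound
        x * y,              # top-right source sound
    ])

# A touch in the exact center weights all four sounds equally (0.25 each).
w = corner_weights(0.5, 0.5)
```

In the real instrument the blending happens over sounds precomputed by the NSynth model rather than as a live crossfade, but the geometry of "which corner dominates where" works the same way.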
At DJ Mag, the news was translated into a "new touchscreen hardware synth based on AI." Viewers were told that the "NSynth Super uses new technology to combine sounds and textures."
Declan McGlynn in DJ Mag looked at what makes this approach special. "NSynth interprets the sounds as numbers and mathematically generates a new series of 'numbers' that are then reconverted back into sound."
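McGlynn's description — sound in as numbers, a new series of numbers out, reconverted to sound — is the encode/blend/decode pattern. Here is a toy illustration of that pipeline; `encode` and `decode` are FFT placeholders standing in for NSynth's learned WaveNet-style autoencoder, not the real model (with a nonlinear learned encoder, blending in this space is genuinely different from crossfading the raw waveforms):

```python
import numpy as np

def encode(audio):
    # Placeholder: a real encoder maps audio to a learned embedding.
    return np.fft.rfft(audio)

def decode(embedding, n):
    # Placeholder: a real decoder synthesizes audio from the embedding.
    return np.fft.irfft(embedding, n)

def blend(audio_a, audio_b, alpha=0.5):
    """Mix two sounds in the model's number space rather than as audio.

    alpha=0 returns (a reconstruction of) audio_a, alpha=1 audio_b,
    and values in between interpolate in the encoded representation.
    """
    z = (1 - alpha) * encode(audio_a) + alpha * encode(audio_b)
    return decode(z, len(audio_a))
```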
It would be a mistake to assume this approach is just blending sounds. Chris Davies in SlashGear explained what is "new" about them: "In short, it takes the core aspects of different sounds and then generates new timbres and dynamics that would be hard – or even impossible – to generate with a traditional analog or digital synthesizer."
What are some examples of how this plays out? McGlynn: "Sounds include a car's engine combined with a sitar, a bass sound morphing with thunder and much more. The result is a truly unique output, with the new hardware allowing users to move between four sounds on a colourful X/Y pad."
What's next? It is not as if you can walk into a store and find this on a shelf. Nonetheless, the code and design files are on GitHub. Chris Davies said, "Google is also releasing NSynth Super as an open source project. That includes not only the software side – which uses TensorFlow and openFrameworks – but the schematics and design templates for the hardware, too."
Davies added that your NSynth Super should be functionally identical, even if it may not look as stylish as Google's, depending on how much you spend on materials and what you use to make the casing.
So the open source version of the NSynth Super prototype, including all of the source code, schematics, and design templates, is available for download on GitHub.
The GitHub page says the touch interface is a capacitive sensor, like the trackpad on a laptop, used to explore the new sounds that NSynth generated between your chosen source audio. It also notes that the case for the electronics can be manufactured with a laser cutter and held together with standard screws and fittings, and that the design can be customized with different materials, colors, dials, and shapes.
— www.blog.google/topics/machine … ed-machine-learning/
© 2018 Tech Xplore