Although you can trace many of the general principles behind them back hundreds of years, synthesizers as we usually understand them – instruments that create sound electronically – have only been around since the mid-20th century. In that time, somewhere around 60 or 70 years depending on what you count as the first true synthesizer, the technology has advanced in almost unimaginable leaps.
On one level, the evolution of synthesizer technology goes hand-in-hand with advances in the wider world of tech. Bob Moog’s first commercial synthesizers were launched into a world where computers were only just moving from transistors to integrated circuits, five years before the first moon landing.
The decades that followed saw rapid advances in computing power, miniaturisation, and speaker, screen and interface design, all of which have shaped the design of hardware and, latterly, software synthesizers. In the 2020s, the cutting edge of synthesizer design is occupied by many of the same trends as wider consumer technology: wireless and portable design, cloud connectivity, machine learning and the potential of artificial intelligence.
There is something distinctive about the realm of synthesizers, though, in that users and designers alike maintain a misty-eyed attachment to the designs and technologies of those earliest commercial synths. It’s analogous to cinema and photography, where certain practitioners steadfastly persist with working practices rooted in film despite the obvious convenience of digital, or to music listening, where the resurgence of vinyl flies in the face of streaming.
In all these cases, it would be inaccurate to dismiss those adhering to older forms as simple Luddites. It comes down to the difference between creative and merely functional technologies. It’s hard to imagine anyone defending the design of, say, a vintage washing machine: that is a functional device aimed at achieving a set result – cleaning clothes – better and more efficiently with each new development.
Artistic endeavours such as music or film-making can’t be boiled down so simply to a quest for efficiency. Many of the techniques and styles that came to dominate art and music in the 20th century went hand-in-hand with the specific technologies used to create them, and these don’t suddenly become obsolete with each new technological advancement.
One thing we can be fairly certain of, then, is that – at least in the medium term – the future of synth design is unlikely to be a total departure from what has come before. Broadly speaking, many of the most innovative and influential instruments of the past decade have been those that work new technologies into a framework based on classic synthesizer design.
UDO Audio’s Super 6 and Super Gemini are great examples of this: instruments that take a classic analogue synthesis framework and innovate by incorporating a binaural signal path and high-resolution digital oscillators. We asked UDO founder George Hearn how he imagines synth design progressing in the near future.
“I like to think of a helix,” Hearn explains, “where trend and fashion repeats on one plane but evolves on another. I think we’ll see more physical modelling and powerful control over these engines, but I think the synth staples of subtractive, wavetable and so on will be here in increasing levels of refinement.
“Before the 18th and 19th centuries, most classical instruments had not yet standardised into the forms we know today, such as the clarinet, cello or trumpet. I think we will see genres of electronic instruments further crystallising in a way that General MIDI promised back in the ’80s. Even now we might say ‘VCO synth’ or ‘wavetable synth’; terms such as these will become more meaningful and refined. I use some of the same electromechanical UI element designs that were introduced in the 1970s, and today’s alternatives are rarely better.”
Sound Particles’ recent SkyDust 3D is an example of similar innovation in the software realm. At first glance, the plugin looks a lot like many other ‘power synths’ we’ve seen in recent years, with a range of multifunctional oscillators, multimode filters, FM capabilities and freely routable envelopes and LFOs. What sets SkyDust apart, however, is the fact that it has spatial audio at the heart of its design.
Sound Particles CEO Nuno Fonseca is confident that this will be the next big trend in synthesis. “We will continue to have multiple variations on the classic methods of synthesis, but for me, the next trend will be spatial synths, and sound designers will have to explore it,” he explains. “The future of music will be spatial, and spatial sound design is a must. Even with stereo, we were not taking advantage of space, but with 3D formats we cannot escape it. We need to sculpt and design sound in 3D.”
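To make the idea concrete, here’s a minimal sketch of what treating space as a synthesis parameter can look like: a mono voice encoded into first-order ambisonics, with LFOs sweeping its azimuth and elevation just as they might sweep a filter cutoff. This illustrates the general principle only – it is not how SkyDust 3D is implemented – and the stand-in voice, LFO rates and NumPy-only setup are all assumptions made for the example.

```python
import numpy as np

SR = 48000   # sample rate in Hz
DUR = 2.0    # duration in seconds

t = np.arange(int(SR * DUR)) / SR

# A plain mono synth voice: two slightly detuned sines as a stand-in source.
voice = 0.5 * np.sin(2 * np.pi * 220.0 * t) + 0.5 * np.sin(2 * np.pi * 220.7 * t)

# Spatial modulation: one LFO orbits the voice around the listener while a
# slower LFO tilts it up and down. In a spatial synth these are modulation
# destinations like any other.
azimuth = 2 * np.pi * 0.25 * t                          # one full orbit every 4 s
elevation = (np.pi / 6) * np.sin(2 * np.pi * 0.1 * t)   # +/-30 degree tilt

# First-order ambisonic (B-format) encoding of the moving source:
# W is the omnidirectional component; X, Y and Z carry direction.
W = voice / np.sqrt(2)
X = voice * np.cos(azimuth) * np.cos(elevation)
Y = voice * np.sin(azimuth) * np.cos(elevation)
Z = voice * np.sin(elevation)

b_format = np.stack([W, X, Y, Z])   # ready to decode to speakers or binaural
print(b_format.shape)               # (4, 96000)
```

The appeal of encoding to an intermediate spatial format like this is that the synth doesn’t need to know the playback system: the same four channels can later be decoded to a speaker array or rendered binaurally for headphones, which is exactly why spatial audio lends itself to the kind of 3D sound design Fonseca describes.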
Given its recent adoption by Apple, who have made spatial audio a key feature of both Apple Music and Logic Pro, it’s not hard to envisage this prediction proving accurate. That said, while 3D audio has an increasing presence in cinema, gaming and large-scale venues, most music is still listened to across a wide variety of devices, from headphones to portable speakers to car stereos, and it’s unlikely that spatial audio will become the standard across all of them, at least in the short term.
So where else can we expect to see innovation in synthesizer design? Another area that many brands are putting considerable focus on is expression. In recent years, UK brand Roli have undoubtedly had a significant impact here, spearheading the wide adoption of MPE (MIDI Polyphonic Expression) through the development of the Seaboard controller range.
Much of the original buzz around these MPE devices came from their ability to replicate the expressiveness of acoustic instruments, but the extra level of expressive control is, arguably, even more potent as a tool for electronic sound design, as demonstrated by Roli’s own Equator2, Strobe2 and Cypher2 synths (developed by FXpansion, which Roli acquired in 2016).
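The mechanism behind MPE is simple but clever: standard MIDI makes pitch bend and pressure per-channel, so MPE gives each sounding note its own ‘member’ channel, letting every finger bend and press independently. The sketch below shows the core of that idea using MPE’s lower-zone defaults (master channel 1, member channels 2–16, ±48-semitone per-note bend range); the class and function names are invented for illustration, and a real implementation would also handle ‘slide’ on CC74, zone configuration and more.

```python
BEND_RANGE = 48.0  # semitones; MPE's default per-note pitch-bend range

class MPENote:
    def __init__(self, note):
        self.note = note        # MIDI note number
        self.bend = 0.0         # semitones, from this channel's pitch bend
        self.pressure = 0.0     # 0.0-1.0, from this channel's aftertouch

    def frequency(self):
        # Equal-tempered pitch including this note's own bend.
        return 440.0 * 2 ** ((self.note + self.bend - 69) / 12)

notes = {}  # member channel -> MPENote

def handle(status, data1, data2=0):
    """Interpret one raw MIDI message arriving in an MPE lower zone."""
    kind, channel = status & 0xF0, (status & 0x0F) + 1
    if channel == 1:
        return                               # master-channel messages ignored here
    if kind == 0x90 and data2 > 0:           # note on
        notes[channel] = MPENote(data1)
    elif kind in (0x80, 0x90):               # note off (incl. velocity-0 note on)
        notes.pop(channel, None)
    elif kind == 0xE0:                       # pitch bend (14-bit value)
        value = (data2 << 7) | data1
        if channel in notes:
            notes[channel].bend = (value - 8192) / 8192 * BEND_RANGE
    elif kind == 0xD0:                       # channel pressure
        if channel in notes:
            notes[channel].pressure = data1 / 127

# Two fingers on the keyboard, each note on its own channel:
handle(0x91, 60, 100)   # note on, channel 2: middle C
handle(0x92, 64, 100)   # note on, channel 3: E
handle(0xE1, 0, 0x50)   # bend channel 2 only: the C rises an octave, the E is untouched
for n in notes.values():
    print(n.note, round(n.frequency(), 2))
```

On a conventional single-channel synth that last message would drag both notes sharp together; the per-note independence is the entire point, and it’s what lets a Seaboard-style surface treat every finger as its own expressive voice.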
This increased expressiveness is no longer a niche interest within the electronic instrument world. Ableton’s recent Push 3 controller puts multidimensional expression front and centre in a clearly mass-market device, while MPE support is increasingly being incorporated into hardware synths such as the Super 6 and ASM’s Hydrasynth.
Osmose, from French brand Expressive E, is another instrument notable for putting expression at the heart of its design. Arnaud Dalier of Expressive E explains why: “In our opinion, interactive technologies are very likely to play an essential role in synth design. Quite simply because they enable us to return to the physical relationship with sound so dear to the world of acoustic instruments.
“Offering profound possibilities of interaction with the world of synthesis means somehow reconnecting with a certain magic where creation becomes more intuitive, where emotions are expressed more directly, where exploration, accidents and imperfection intertwine.”
This approach to instrument design, using new technological advancements to enhance the way musicians interact with the sound engine, is likely to be a key focus of both synthesizer and MIDI controller design in the coming decade. “Where I would like to see development is fantastic touch screens with haptic feedback,” says UDO’s George Hearn.
Touchscreens are likely to play an increasing role in how we interact with synths in the near future. Touch-controlled synths are nothing new – Moog’s Voyager featured a touch surface back in the ’00s, and Waldorf’s Quantum and Iridium both feature a touchscreen interface. With Logic Pro now available on the iPad, bringing with it AUv3 compatibility, we’re likely to see a rapid increase in plugins that make better use of mobile interfaces.
Hearn’s vision for deep control goes somewhat further than this though, as he explains: “Our instruments at UDO Audio allow you to engage with your sound, grab it, manipulate it and lose yourself in sonic adventures. Part of how they achieve this is by providing a simple sound engine and concept with immediate control.
“I would love to somehow take this paradigm to an alternative reality where I can control an orchestra at my fingertips, using a slider to change the size of all the string instruments at once, or gradually turn the entire woodwind section into brass, and so on. Maybe then I would like to move a dial and change the size of the concert hall and place a solo voice in the box behind me. It’s all possible today, but the interfaces we have to this kind of sound-sculpting power have not caught up. In fact, I might go away and put some thought into this one.”
Beyond these forms of interaction, there’s also no doubt that further advances in computing power will have a significant impact on the future of synthesizer design. Kirkis, head of experimental electronic instrument brand Destiny Plus, picks up on this: “Realistically, smaller footprints and powerful computing are certain. I would like to say user interface [will change], however our history and attachment to certain interfaces would challenge that.
“Perhaps a more sophisticated way of synthesising real time physics? How much computing would I need to synthesise the sound of a Steinway falling off the Empire State Building morphed with the snapping of a leg off a chair? Neural synthesis may also change the way we compose.”
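Kirkis’s falling-piano scenario is well beyond a short example, but physical modelling in its simplest form is decades old and computationally cheap, which is why growing processing power keeps pushing it further. Below is a sketch of Karplus–Strong plucked-string synthesis, the classic toy physical model: a burst of noise circulating through a delay line and an averaging filter behaves remarkably like a vibrating string. The tunings, decay value and output filename are arbitrary choices for the example.

```python
import numpy as np
import wave

SR = 44100  # sample rate in Hz

def pluck(freq, dur=1.5, decay=0.996):
    """Karplus-Strong: a noise burst circulates in a delay line whose length
    sets the pitch; averaging adjacent samples models the string's damping."""
    n = int(SR / freq)                   # delay-line length ~ one period
    line = np.random.uniform(-1, 1, n)   # the 'pluck': an initial noise burst
    out = np.empty(int(SR * dur))
    for i in range(len(out)):
        out[i] = line[i % n]
        # low-pass feedback: average this sample with its neighbour and decay
        line[i % n] = decay * 0.5 * (line[i % n] + line[(i + 1) % n])
    return out

# A short arpeggio of plucked 'strings', written out as a mono 16-bit WAV.
sig = np.concatenate([pluck(f) for f in (110.0, 146.83, 220.0)])
pcm = (np.clip(sig, -1, 1) * 32767).astype(np.int16)

with wave.open("pluck.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(pcm.tobytes())
```

A handful of lines yields a surprisingly physical result; scaling the same idea up to full simulations of pianos and snapping chair legs is largely a question of the computing power Kirkis describes.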
Artificial intelligence and neural networks – computing networks modelled on the human brain – will undoubtedly have a role to play in the future of music software. We are already seeing the development of tools that look to synthesise sounds using these kinds of technologies. The most notable right now are probably Google’s recently released text-to-music generator MusicLM and OpenAI’s Jukebox.
Both of these tools use machine learning to generate full audio clips from prompts – free-text descriptions in MusicLM’s case (for example ‘lo-fi hip-hop beat’ or ‘chill synthwave’), and genre, artist and lyric inputs in Jukebox’s. The results are far from perfect, but they are genuinely impressive and can be thoroughly convincing in small doses. While doomsayers will claim these sorts of technologies spell the end of artistic creativity, there’s plenty of evidence to the contrary.
patten’s impressive Mirage FM, released earlier this year, demonstrates how a genuinely creative and emotive album can be woven entirely from the slightly uncanny sounds generated by text-to-audio AI (patten used Riffusion, an earlier text-to-audio model). In a similarly creative space, Canadian musician Grimes has unveiled an AI-powered tool designed to let musicians replicate her voice in their own music.
What will be interesting is seeing how synth designers make these various trends fit together. AI that can create any sound you describe is great, but how do we make a text input like that gel with the expressivity of something like Osmose? A tool that can translate your exact intention into sound is undoubtedly powerful, but it fails to account for how much inspiration is found in the space between intention and result – the ‘happy accidents’ that can spark new ideas.
What might synthesis look like in the far future then? “All of biology is composed of about 20 amino acids, and by composing them you can create all of life,” Kirkis of Destiny Plus posits. “It could be interesting to redesign the paradigm of components for analogue design into a handful of components. Biology and biosynthesis are somewhat underexplored...”