Next Generation Sound Synthesis

Author: James Perry
Posted: 13 Aug 2013 | 14:16
When you think about applications for high performance computing and large-scale simulations, you probably think of particle physics, climate modelling, or perhaps molecular biology. You probably don't think of music. But the Next Generation Sound Synthesis project (NESS) may change that.
Up until now, most digital sound synthesis has either used primitive abstract methods (such as additive synthesis and FM synthesis) or used combinations of pre-recorded samples to generate music. These methods are computationally cheap but they have their limitations - notably, they don't always sound realistic, they can be hard for musicians to control, and they lack flexibility. A newer method, physical modelling synthesis, promises to overcome these limitations - but at the cost of being much more computationally intensive.
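To make the contrast concrete, here is a minimal sketch of FM (frequency modulation) synthesis, one of the "cheap" abstract methods mentioned above. This is a generic textbook illustration, not code from the NESS project; the function name and parameters are invented for this example. A single sine wave's frequency is modulated by another sine, producing a rich spectrum at almost no computational cost:

```python
import math

def fm_tone(duration=1.0, sample_rate=44100,
            carrier_hz=440.0, mod_hz=220.0, mod_index=2.0):
    """Generate a simple FM tone: a carrier sine whose phase is
    modulated by a second sine. One sin() call pair per sample,
    so this runs easily in real time on any processor."""
    n = int(duration * sample_rate)
    samples = []
    for i in range(n):
        t = i / sample_rate
        samples.append(math.sin(2 * math.pi * carrier_hz * t
                                + mod_index * math.sin(2 * math.pi * mod_hz * t)))
    return samples

tone = fm_tone(duration=0.1)  # 0.1 s of audio at 44.1 kHz
```

The cheapness is also the limitation: the parameters (carrier, modulator, index) bear no direct relation to any physical instrument, which is why such sounds can be hard for musicians to control intuitively.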
In the European Research Council-funded NESS project, researchers from the Acoustics Group at the University of Edinburgh have teamed up with EPCC to further develop physical modelling synthesis, using HPC techniques to overcome the computational barriers. The goal is to generate the highest quality synthetic sounds possible, with GPU (graphics processing unit) acceleration to help keep run times manageable.
The project will focus on six areas, covering a range of different musical instrument families:
  • Brass instruments
  • Electromechanical instruments
  • Nonlinear plate and shell vibration
  • Modular synthesis environments
  • Room acoustics modelling
  • Embeddings and spatialisation
The computational difficulty of these problems varies widely, from simple linear one-dimensional models that can easily run in real time on a single processor, to 3D models of large spaces that are not feasible to run at all on current hardware due to memory constraints. However, the large problems are very well suited to GPU acceleration as they mostly involve performing the same simple operations over and over again on many different data items - exactly what GPUs are good at.
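The simplest end of that spectrum is the ideal string, modelled by the 1D wave equation and solved with a finite-difference time-stepping scheme. The sketch below is a generic textbook example, not NESS code (the function name and parameters are invented here); it shows the structure that makes these problems GPU-friendly: every time step applies the same small stencil update to every grid point independently.

```python
def simulate_string(n_points=100, n_steps=200, courant=1.0):
    """Leapfrog finite-difference scheme for the 1D wave equation
    u_tt = c^2 u_xx, the simplest physical model of a string.
    Ends are fixed (displacement zero)."""
    # Triangular "pluck" as the initial displacement, zero initial velocity
    u_prev = [min(i, n_points - 1 - i) / (n_points / 2) for i in range(n_points)]
    u = list(u_prev)
    lam2 = courant ** 2  # (c*dt/dx)^2; must be <= 1 for stability
    for _ in range(n_steps):
        u_next = [0.0] * n_points
        for i in range(1, n_points - 1):  # interior points only; ends stay at 0
            u_next[i] = (2 * u[i] - u_prev[i]
                         + lam2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
        u_prev, u = u, u_next
    return u

final_shape = simulate_string()
```

The inner loop is identical arithmetic on every grid point with no dependencies within a step, so it maps directly onto thousands of GPU threads; scaling the same idea to a 3D room simply replaces this 1D stencil with a 3D one over millions of points, which is where the memory and compute demands explode.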
This sound sample was generated by passing a guitar sound through one of the 3D room models being developed for NESS:
The NESS project started in January 2012 and will run for a total of five years. Several acoustic models, including plates, timpani and whole rooms, are under development and more are planned. An interface is also being developed to allow visiting composers to make use of the models as easily as possible.
You can read more about the NESS project in EPCC News no. 71.
The images show a simulation of a timpani drum, created as part of NESS. The effects of a drum strike are modelled over time, clearly highlighting the reverberation of the soundwave inside the drum chamber (leading to audible "cavity modes"), as well as the full spatialised radiation pattern of the virtual instrument. Image: Adrian Mouat & Stefan Bilbao.


James Perry, EPCC
