Using HPC to understand human hearing

Author: Guest blogger
Posted: 23 Jan 2015 | 14:58

The Auditory pilot project, involving EPCC and the University’s Acoustics and Audio Group, sought to use HPC to enable faster run times for computational models of the human hearing organ. Dr Michael Newton of the Group explains the work.

Such models are routinely solved in commercial environments such as Matlab, and can require many hours to complete a single simulation run. The Auditory project investigated ways in which these models might harness the power of HPC to reduce run times, providing greater opportunities for the rapid development and use of such models in a range of research and clinical environments.

The human ear is tasked with converting acoustical sound waves in the environment into neural signals that can be interpreted by the brain. Sound arrives first at the outer ear, or pinna, which forms the most obvious visible component of the auditory system. The sound then travels into the ear canal and is focused onto the ear drum, or tympanic membrane, which forms part of the middle ear. This membrane helps to efficiently transfer acoustical vibrations in air to vibrations within the fluid-filled cochlea, or inner ear, which is coupled to the ear drum via the three smallest bones in the human body. The process of transduction itself, where mechanical vibration is converted into electrical signals that are carried away to the brain via neurons, happens within the cochlea. This step is the most complex and remarkable feature of this acoustical chain. 

The Auditory project was concerned with speeding up computational simulations of the acoustical-neural transduction process that takes place within the cochlea. 

This process relies upon the extraordinary morphology of the cochlea, which consists of a spiralled, fluid-filled tubular structure approximately 35mm long, bisected along its internal length by the cochlear partition. The cochlear partition is formed from a complex array of membranes and cells, but can be thought of as a ribbon-like structure that is stiff along its length and compliant in the transverse direction. It responds to pressure waves within the cochlea somewhat like a line of buoys bobbing up and down on the sea surface. A key feature of this morphology is the variation in the physical properties of the ‘ribbon’ along the cochlear length, which produces a precise place-frequency mapping for incoming sound waves and underpins the cochlea’s role as a kind of frequency analyser. Having a good model of this mechanical structure, and understanding its role in the transduction process, lies at the heart of unpicking the subtleties of cochlear function.

One approach taken in modelling the cochlear transduction process is to discretise the cochlear partition along its length, and to represent each of these discrete ‘elements’ using a set of carefully tuned mechanical oscillators. The whole family of elements that make up the digital cochlea, perhaps 4000 or more, can then be coupled together by acoustical equations that describe sound wave propagation within the surrounding cochlear fluid. The place-frequency mapping seen in the real cochlea can be mimicked by careful tuning of each of the individual elements.

Describing the preceding system mathematically yields a large set of coupled differential equations. Various methods exist for solving such systems, including the commercial solvers built into platforms such as Matlab.
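One reason solver choice matters is that the tuned elements span a huge range of time scales, which tends to make the coupled system stiff. The toy Python comparison below (an illustration, not the project's code) shows the effect on a two-variable linear system whose decay rates differ by a factor of 10,000: an explicit method such as RK45 is forced into very small steps by stability, while an implicit method such as BDF (the family behind Matlab's ode15s) covers the same interval in far fewer steps.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy stiff linear system: two decay rates spread by a factor of 1e4,
# mimicking the wide spread of time scales in a tuned oscillator bank.
A = np.array([[-1.0, 0.0],
              [0.0, -1.0e4]])

def rhs(t, y):
    return A @ y

y0 = np.array([1.0, 1.0])
explicit = solve_ivp(rhs, (0.0, 10.0), y0, method="RK45")  # explicit Runge-Kutta
implicit = solve_ivp(rhs, (0.0, 10.0), y0, method="BDF")   # implicit multistep

print(f"RK45 steps: {explicit.t.size}, BDF steps: {implicit.t.size}")
```

On a genuinely stiff system the explicit solver typically needs orders of magnitude more steps than the implicit one, which is why stiff solvers dominate in cochlear modelling.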

The Auditory project explored a range of alternative environments, including the open-source PETSc library. PETSc offers functionality broadly similar to the Matlab solvers, but allows greater control over the underlying solver routines, as well as extension to parallel architectures. This can come at the cost of reduced ease of use compared to Matlab. At the other end of the solver spectrum, the project also explored the possibility of designing hand-coded algorithms, such as Finite Difference Time Domain (FDTD) techniques, to solve the cochlear equations. This latter approach has proved of particular interest, with some promising early results.
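A minimal example of the FDTD idea, assuming a plain 1-D wave equation rather than the full cochlear model: the classic leapfrog update marches the solution forward in time on a spatial grid, with a tone driven in at one boundary. The real cochlear scheme is considerably more involved, but its core update loop has this shape.

```python
import numpy as np

# 1-D wave equation u_tt = c^2 u_xx solved with the standard second-order
# leapfrog FDTD update (illustrative parameters, not the cochlear model's).
c = 340.0                 # wave speed in m/s
L = 0.035                 # domain length in metres
N = 350                   # grid points
dx = L / (N - 1)
dt = 0.9 * dx / c         # CFL condition: c*dt/dx <= 1 for stability
lam2 = (c * dt / dx) ** 2

u_prev = np.zeros(N)      # u at time step n-1
u = np.zeros(N)           # u at time step n
steps = 200
f_drive = 5000.0          # boundary drive tone in Hz

for n in range(steps):
    u_next = np.zeros(N)
    # Interior: u[i]^{n+1} = 2u[i]^n - u[i]^{n-1} + lam2*(u[i+1] - 2u[i] + u[i-1])
    u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                    + lam2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
    # Drive the left boundary (ear-drum side) with a tone; fix the far end.
    u_next[0] = np.sin(2.0 * np.pi * f_drive * (n + 1) * dt)
    u_next[-1] = 0.0
    u_prev, u = u, u_next

print(f"max displacement after {steps} steps: {np.abs(u).max():.3f}")
```

Because each update touches only neighbouring grid points, loops like this are straightforward to hand-tune and to distribute across processors, which is part of what makes the FDTD route attractive for HPC.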

The Auditory project provided a valuable environment within which a range of approaches to solving cochlear models could be explored. Knowledge gained will now be put to use in further development of these models, and will inform forthcoming external grant applications. 

It is hoped that successful development of HPC tools specially tuned for such models may one day lead to their use as diagnostic tools within clinical environments.

HPC challenges

Fiona Reid, Applications Consultant at EPCC, summarises the technical challenges of this project.

“This project was particularly challenging as there is no direct replacement for the ordinary differential equation solver used by the Matlab code. Our initial profiling of the Matlab code showed that almost the entire runtime (95% or more) was taken up with this solver and thus any attempts to parallelise the code would require a C/C++ replacement to be found. 

“Whilst solvers for ordinary differential equations are relatively commonplace, the number of possibilities is greatly reduced when a non-zero right-hand side with a time- and space-dependent mass matrix is present. This was the case with the Matlab code. 

“We found just two possible replacement solvers, PETSc and SUNDIALS. We focussed our efforts on PETSc as it allows for parallel computations using MPI.”
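The mass-matrix form Fiona describes can be written as M(t) y′ = f(t, y). A simple, purely illustrative way to see why it narrows the options: when M is invertible, one can fold it into the right-hand side by performing a linear solve at every evaluation, as in the Python sketch below (a toy system, not the project's equations). That per-evaluation linear solve is exactly the cost that dedicated libraries such as PETSc and SUNDIALS are designed to handle efficiently, which is why so few general-purpose solvers support this form directly.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve

def mass(t):
    """Toy time-dependent mass matrix (hypothetical, for illustration only)."""
    return np.array([[2.0 + np.sin(t), 0.3],
                     [0.3, 1.0]])

def forcing(t, y):
    """Toy non-zero right-hand side f(t, y)."""
    return np.array([-y[0] + np.cos(t), -2.0 * y[1]])

def rhs(t, y):
    # Fold the mass matrix into the RHS: solve M(t) y' = f(t, y) for y'.
    return solve(mass(t), forcing(t, y))

sol = solve_ivp(rhs, (0.0, 10.0), np.array([1.0, 0.0]), method="BDF")
print(sol.y[:, -1])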

Images

Top: The response of a cochlea model to a single frequency input stimulation is plotted in both time and space. The plot reveals the way that a travelling wave is formed along the cochlea, which, upon reaching its ‘best place’, causes a maximal cochlear vibration at that spatial location. The cochlear motion beyond this location falls away rapidly, somewhat akin to a wave breaking on a beach. The cochlea is thus able to act like a kind of ‘mechanical’ frequency analyser, providing the brain with remarkably accurate time and frequency information from the very moment of transduction.

Above: A finite difference time domain simulation of the cochlear travelling wave. A single frequency tone is used to stimulate the cochlea via the ear drum (left side), and this wave propagates until it finds its ‘best place’ along the cochlear partition, the local properties of which closely match the stimulus frequency. This place-frequency mapping allows the cochlea to function as a kind of frequency analyser, amongst other things.

This work was funded by the College of Humanities & Social Sciences Challenge Investment Fund.