AVIL (Photo: Torben Nielsen)

Improved compensation strategy for hearing aids

Tuesday 19 Sep 17


Tobias May
Associate Professor
DTU Health Tech
+45 45 25 39 59
Hearing aids amplify all the sounds around us – even the ones we don’t want to listen to. New research makes it possible for hearing aids to distinguish between direct sound components we want to hear, like a person’s voice, and unwanted room reflections.

Hearing-impaired listeners typically have difficulty hearing soft sounds. An important function of a hearing aid is therefore to amplify sound and make it audible. However, today's hearing aids amplify all sounds, so a user hears not only the amplified voice of a talking person, but also the amplified reflections from the walls. In a small living room the reverberation may be tolerable, but in restaurants, churches, railway stations, and other large spaces it can make listening quite challenging. In addition, amplifying the reflections distorts the perceived spatial location of sound sources, making it harder for a listener to determine where a sound is coming from.
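The effect described above can be illustrated with a toy simulation: a "dry" voice signal becomes reverberant when convolved with a room impulse response, in which the first sample represents the direct sound and the decaying tail represents the wall reflections. This is a minimal sketch for intuition only; the impulse-response model, sampling rate, and RT60 value are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def synthetic_rir(fs=16000, rt60=0.5, length_s=0.6, seed=0):
    """Toy room impulse response: white noise with an exponential decay.

    rt60 is the time it takes the reverberant energy to drop by 60 dB.
    """
    rng = np.random.default_rng(seed)
    n = int(length_s * fs)
    t = np.arange(n) / fs
    decay = np.exp(-6.91 * t / rt60)  # 6.91 ~ ln(1000): -60 dB amplitude at rt60
    rir = rng.standard_normal(n) * decay
    rir[0] = 1.0  # the direct-sound component
    return rir

def reverberate(dry, rir):
    """Convolve a dry signal with the room impulse response."""
    return np.convolve(dry, rir)[: len(dry)]
```

Amplifying the output of `reverberate` uniformly, as a conventional hearing aid would, boosts the decaying reflection tail together with the direct sound, which is exactly the problem the article describes.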

A group of researchers from the Technical University of Denmark (DTU) has developed a new algorithm that amplifies only the voice and not the reflections. The algorithm aims to restore audibility for hearing-impaired listeners while preserving the spatial perception of sound sources.
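The article does not give the algorithm's details, but the general idea of favoring direct sound over reflections can be sketched as a time-frequency gain: bins where the direct-to-reverberant energy ratio (DRR) is high are passed through, while reverberation-dominated bins are attenuated. The sketch below uses oracle knowledge of the direct and reverberant spectra purely for illustration; a real hearing-aid algorithm would have to estimate these quantities blindly from the microphone signals, and the threshold and attenuation values here are arbitrary assumptions.

```python
import numpy as np

def apply_drr_gain(stft_mix, stft_direct, stft_reverb, drr_threshold_db=0.0):
    """Attenuate time-frequency bins dominated by reverberant energy.

    stft_* are complex spectrograms of the mixture and its (oracle)
    direct and reverberant parts. Bins whose direct-to-reverberant
    ratio exceeds the threshold keep unit gain; the rest are damped.
    """
    eps = 1e-12  # avoid log of zero
    drr_db = 10 * np.log10((np.abs(stft_direct) ** 2 + eps)
                           / (np.abs(stft_reverb) ** 2 + eps))
    gain = np.where(drr_db > drr_threshold_db, 1.0, 0.1)
    return stft_mix * gain
```

Because the direct-sound bins pass through unmodified, the interaural cues that carry spatial location are largely preserved, which is the property the DTU researchers emphasize.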

“Our method means that it will be easier for a user of a hearing aid to understand what another person is saying and to decide where in the room the sound is coming from,” says Tobias May, assistant professor at Hearing Systems, DTU Electrical Engineering, who is one of the researchers behind the new method.

Interest from hearing-aid industry


The new research has been published in The Journal of the Acoustical Society of America and presented at one of the biggest conferences on hearing-aid technology. 

“Several hearing-aid companies have shown great interest in our results, which we will now refine. The next step will be to test the method with competing voices from multiple persons talking at the same time. Because the new algorithm does not distort the perceived location of sound sources, our hope is that it will improve speech understanding in everyday listening situations,” explains Tobias May.

If the results of the next tests are equally good, the prospects are promising for future users of hearing aids. The DTU researchers' method is capable of running in real time and could be implemented in some current hearing aids.

Audio-Visual Immersion Lab (Photo: STAMERS KONTOR)

Photos: The AVIL laboratory (Audio-Visual Immersion Lab) at DTU Electrical Engineering, where it is possible to simulate acoustic scenes in different environments and test the interaction with spatial hearing.