Other research projects


Characterizing Comodulation Masking Release in Hearing-Impaired Listeners

Jonathan Regev, Research Assistant

Description: The project investigates the consequences of hearing loss on comodulation masking release (CMR). The aim is to identify the processes underlying CMR performance, as well as possible limitations of hearing-impaired listeners' supra-threshold abilities in each of these processes. Ultimately, the goal is to use these findings to include new key processing stages in current computational models of the auditory system.


The Effect of Hearing Loss on Sound Texture Perception

Oliver Scheuregger, Research Assistant

Sound textures, whose statistics are shaped by auditory processing, offer an ecologically valid means of probing the auditory system. Previous studies have explored the perception of sound textures in normal-hearing listeners, but no work has investigated how hearing-impaired listeners perceive textures. This project will be a direct extension of Oliver's master's project, in which he measured sound texture identification and discrimination performance in normal-hearing, hearing-impaired, and older listeners. The goal is to investigate whether sound textures can be used to better understand and diagnose differences between normal-hearing and hearing-impaired listeners, with the ultimate aim of aiding the development of new compensation strategies.


Jasper Ooster, Visiting PhD Student

Jasper Ooster is doing his PhD in the Medical Physics group at the University of Oldenburg, where he is investigating the automated administration of speech intelligibility tests using automatic speech recognition (ASR). The ASR system is used for unsupervised logging of the subject's responses during the measurement. He has already shown promising results for the unsupervised administration of the German matrix sentence test (OLSA), with no loss in measurement accuracy. During his stay at DTU, he will investigate the suitability for automation of the Danish HINT and the Dantale II test, as an exemplary evaluation of a multilingual ASR approach for the matrix sentence test.


Integrating the Visual in AVIL
Kasper Duemose Lund, Research Assistant

Kasper is working on the integration of virtual reality into the Audio Visual Immersion Lab (AVIL). At present, the AVIL has no permanent setup for providing visual stimulation to subjects participating in perceptual experiments. With this HTC Vive-based virtual reality implementation, researchers will have a framework for exposing subjects to 3D visual environments that correspond to the audio environments played back in the AVIL. The existing audio engine and the new visual component will be fully synchronized, so that audio-visual scenarios can be presented in a highly controlled way. Finally, a protocol for constructing audio-visual perception experiments using this system will be developed.

Innovative Hearing Aid Research – Ecological Conditions and Outcome Measures
Sergio Luiz Aguirre, Early Stage Researcher in the HEAR-ECO project

This PhD project at the Eriksholm Research Centre focuses on the reproduction of realistic sound scenarios and how to apply them to the measurement of listening effort. The long-term goal is to create new tests for examining the benefit of hearing-aid technology on listening effort in an ecologically valid environment. The project is jointly overseen by the Hearing Sciences – Scottish Section research group of the University of Nottingham (William Whitmer and Graham Naylor) and the Eriksholm Research Centre (Thomas Lunner). In addition, a collaboration with Hearing Systems at DTU will explore new ways of creating the appropriate sound field in the Audio Visual Immersion Lab (AVIL).
HEAR-ECO is a project that aims to develop and combine new tools and outcome measures for realistic communication, and to translate these tools into innovative developments and evaluations of new technology for those with hearing loss. At its core, HEAR-ECO is training a new team of researchers working at the nexus of technology, psychology, physiology and audiology. The project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 765329.