Probabilistic deep learning for hearing aid speech separation

Rasmus Malik Thaarup Høegh

One of the most sought-after improvements to hearing aids is better intelligibility of speech in noise. Speech separation is the process of separating speech from interfering sound, and speech separation algorithms have advanced considerably with the introduction of deep learning methods. Current deep learning systems, however, perform poorly in scenarios with speakers and noises that the system did not see during training. Modelling uncertainty through probabilities can help models learn from less data and generalize better, so the project explores how probabilistic modelling can enable deep learning models to perform well in unseen scenarios. The project investigates probabilistic deep learning methods (such as variational autoencoders) and develops these methods for unsupervised representation learning of sound signals. The project is an industrial PhD at WS Audiology (supervisors J. B. B. Nielsen and A. Westermann), supported by Innovation Fund Denmark, and carried out in collaboration with the DTU Compute Section for Cognitive Systems (M. Mørup and L. K. Hansen) and DTU Health Tech Hearing Systems (A. A. Kressner).
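The variational-autoencoder idea mentioned above can be sketched in a few lines. This is a minimal illustration, not the project's actual model: the linear encoder and decoder, the toy dimensions, and the single "audio frame" are hypothetical stand-ins for the neural networks and sound signals used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): a short audio frame encoded to a 2-D latent.
x_dim, z_dim = 8, 2
x = rng.normal(size=x_dim)  # a single stand-in "audio frame"

# Linear encoder producing mean and log-variance of q(z|x)
# (a stand-in for an encoder network).
W_mu = rng.normal(size=(z_dim, x_dim)) * 0.1
W_logvar = rng.normal(size=(z_dim, x_dim)) * 0.1
mu, logvar = W_mu @ x, W_logvar @ x

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
# which keeps the sample differentiable w.r.t. mu and logvar.
eps = rng.normal(size=z_dim)
z = mu + np.exp(0.5 * logvar) * eps

# Linear decoder reconstructing x from z (a stand-in for a decoder network).
W_dec = rng.normal(size=(x_dim, z_dim)) * 0.1
x_hat = W_dec @ z

# ELBO = reconstruction term - KL(q(z|x) || p(z)) with a standard-normal prior.
recon = -0.5 * np.sum((x - x_hat) ** 2)  # Gaussian log-likelihood up to a constant
kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
elbo = recon - kl
print(f"KL = {kl:.4f}, ELBO = {elbo:.4f}")
```

Training maximizes the ELBO over encoder and decoder parameters; the KL term regularizes the learned representation, which is one reason such models can generalize from less data.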

Supervisors: Morten Mørup (Section for Cognitive Systems, DTU Compute), Jens Brehm Bagger Nielsen (WS Audiology, Research and Development), Abigail Anne Kressner (Hearing Systems, DTU Health Tech), Adam Westermann (WS Audiology), Lars Kai Hansen (DTU Compute)

To be completed in 2022

Contact

Rasmus Malik Thaarup Høegh
Industrial PhD
DTU Compute
+45 51 92 71 76