AI improves speech understanding of hearing aid users


Tim Fischer, Postdoctoral Researcher, Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern. © ARTORG Center

In noisy environments, it is difficult for hearing aid users or people with a hearing implant to understand their conversational partner, because current audio processors still struggle to focus precisely enough on specific sound sources. In a feasibility study, researchers from the Hearing Research Laboratory at the University of Bern and the Inselspital now suggest that artificial intelligence could solve this problem.

Hearing aids and hearing implants are currently not very good at selectively filtering a specific voice from among many sound sources, something the normal-hearing brain does naturally and which is known in professional circles as the "cocktail party effect". Accordingly, it is difficult for people with hearing aids to follow a conversation in a noisy environment. Researchers at the Hearing Research Laboratory of the ARTORG Center, University of Bern, and Inselspital have now devised an unusual approach to improve hearing aids in this respect: virtual auxiliary microphones whose signals are calculated by artificial intelligence.

Like an invisible microphone on the forehead

The more microphones an audio processor has available and the more widely they are distributed, the better a hearing aid can focus on sound from a particular direction. Due to lack of space, however, most hearing aids have only two closely spaced microphones. In the first part of the study, the Hearing Research Laboratory (HRL) determined that the optimal microphone location for focusing is the middle of the forehead, though this is a very impractical spot. "We wanted to get around this problem by adding a virtual microphone to the audio processor using artificial intelligence," said Tim Fischer, a postdoctoral researcher at HRL, explaining this unconventional approach.
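The directional focusing described above is commonly done with delay-and-sum beamforming: each microphone signal is delayed so that sound from the target direction lines up across channels, then the channels are averaged so the target adds coherently while off-axis noise partially cancels. The sketch below is purely illustrative; the function name, geometry, and parameters are assumptions for a minimal far-field example, not the study's algorithm.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, direction, fs=16000, c=343.0):
    """Minimal far-field delay-and-sum beamformer (illustrative sketch).

    signals:       (n_mics, n_samples) time-domain microphone signals
    mic_positions: (n_mics, 2) microphone coordinates in metres
    direction:     unit vector (2,) pointing toward the target source
    """
    n_mics, n_samples = signals.shape
    # Relative arrival-time offset per mic: projection of its position
    # onto the look direction, divided by the speed of sound.
    delays = mic_positions @ direction / c
    delays -= delays.min()  # make all compensation delays non-negative

    out = np.zeros(n_samples)
    freqs = np.fft.rfftfreq(n_samples, 1.0 / fs)
    for m in range(n_mics):
        # Apply a fractional delay as a phase shift in the frequency domain,
        # aligning this channel with the target direction before summing.
        spec = np.fft.rfft(signals[m])
        spec *= np.exp(-2j * np.pi * freqs * delays[m])
        out += np.fft.irfft(spec, n_samples)
    return out / n_mics
```

With only two closely spaced microphones the alignment delays are tiny, which is exactly why the beam is so blunt; adding a (virtual) microphone further away, as the study proposes, widens the array aperture and sharpens the focus.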

Cocktail party data, a learning neural network, a subject group test

For the study setup, ARTORG Center engineers used the "Bern Cocktail Party Dataset", a collection of noise scenarios with multiple sound sources, drawn from multi-microphone recordings of hearing aid and cochlear implant users. Using 65 hours of audio recordings (over 78,000 audio files), they trained a neural network to refine a commonly used directionality algorithm (beamforming). To improve speech understanding, the deep learning approach calculated additional virtual microphone signals from the audio mixture. Twenty subjects tested the AI-enhanced hearing in a subjective listening test accompanied by objective measurements. Particularly in cocktail party settings, the virtually sampled microphone signals significantly improved speech quality. Hearing aid and cochlear implant users could therefore benefit from the presented approach, especially in noisy environments. The findings have now been published in the journal "Hearing Research" (ScienceDirect).
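The core idea of the training step is to learn a mapping from the real microphone signals to the signal a microphone *would* record at the virtual position. The toy sketch below stands in for that idea with a short linear filter fitted by least squares instead of the study's deep network, on simulated data rather than the Bern Cocktail Party recordings; all names and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_frames(channels, order):
    """Stack `order` consecutive samples of each channel as features."""
    n = len(channels[0]) - order + 1
    cols = [ch[i:i + n] for ch in channels for i in range(order)]
    return np.stack(cols, axis=1)

# Simulated stand-in: the "virtual" forehead mic hears a delayed,
# weighted mix of the two real behind-the-ear microphones.
n_samples = 4000
real = rng.standard_normal((2, n_samples))
virtual = 0.6 * real[0] + 0.4 * np.roll(real[1], 2)

# Fit a short filter predicting the virtual signal from the real ones
# (least squares here; the study trained a neural network for this role).
order = 8
X = make_frames(real, order)
y = virtual[order - 1:]
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Relative prediction error of the learned virtual-microphone signal
err = np.mean((X @ w - y) ** 2) / np.mean(y ** 2)
```

In this toy setup the mapping is exactly linear, so the fit is near perfect; real acoustic scenes are nonlinear and time-varying, which is why the study resorts to a deep network trained on 65 hours of recordings.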

"I think that artificial intelligence represents an important contribution to the next generation of hearing prostheses, as it has great potential for improving speech understanding, especially in difficult listening situations," says Marco Caversaccio, Chief Physician and ENT Department Head.

As auditory assistive technologies and implants are a major focus of research at the Inselspital, important data-based foundations are being laid here for further development that should bring the natural hearing experience closer. The novel approaches will directly benefit patients within the framework of translational studies.

Outlook

Although the virtually added microphones significantly improved speech understanding with hearing aids in this study, further work must overcome some technical hurdles before the methodology can be used in hearing aid or cochlear implant audio processors. This includes, for example, stable directional filtering even in highly reverberant environments.

Hearing Research Laboratory

The Hearing Research Lab is a clinically oriented research cooperation between the ARTORG Center for Biomedical Engineering Research, University of Bern, and the Department of Ear, Nose and Throat, Head and Neck Surgery, of the Inselspital (University Hospital Bern). A multidisciplinary team combines the expertise of engineering, audiology, surgery and physics. The aim is to develop novel medical technologies that help hearing-impaired people and to support clinicians in the diagnosis and treatment of inner ear diseases. The range of research activities includes psychoacoustic experiments, anatomical and electrophysiological studies, the design and implementation of clinical software and medical devices, and carrying out observational studies and clinical trials. To promote sustainable research progress, the Hearing Research Lab actively collaborates with leading medical, academic and industrial partners in hearing research.


