MusIC: Making Music More Accessible for Cochlear Implant Users
Listening to music is an important and enjoyable part of many people’s lives and can improve quality of life through its recreational and rehabilitative functions. For example, music involvement supports brain development, particularly during childhood, and can mitigate symptoms of Alzheimer’s disease in older adults. Yet people with hearing impairment face severe limitations in perceiving and appreciating music.
This project focuses on increasing music appreciation for cochlear implant (CI) users. In particular, it investigates individually adjustable settings to improve the perception and appreciation of music.
The main goal of this project is to enhance the singing voice in music, as previous studies have shown that this enhancement improves music appreciation for CI users. In contrast to state-of-the-art hearing devices, which are designed to fit everybody (one size fits all), this proposal takes the individual user into consideration. For this purpose, deep neural networks will be used to apply the desired user preference, so that music can be enhanced according to the individual’s needs.
The new technologies are designed to perform in real time and will be evaluated both in a controlled virtual reality environment and in a real sound scenario. This allows the individual CI user to be characterized from the periphery to the central auditory system.
Music Source Separation Using Deep Neural Networks:
Music samples from the real-time music source separation experiment. In this experiment, a multilayer perceptron was used to separate the singing voice from the instrumental accompaniment and to remix the music track. Our results indicate that CI users enjoy music more when the singing voice is enhanced by 8 dB relative to the background accompaniment.
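As an illustration of this processing chain, the sketch below shows a single forward pass of a toy mask-based MLP followed by a remix that boosts the separated vocals by 8 dB relative to the accompaniment. This is a minimal sketch, not the published implementation: the weights are untrained random values, and the function names and dimensions (`mlp_mask`, `remix`, 257 FFT bins, 128 hidden units) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_mask(frame, W1, b1, W2, b2):
    """One MLP forward pass: magnitude-spectrum frame -> soft vocal mask in [0, 1]."""
    h = np.maximum(0.0, frame @ W1 + b1)          # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid keeps the mask in [0, 1]

def remix(vocals, accompaniment, gain_db=8.0):
    """Remix separated stems, boosting the vocals by gain_db relative to the rest."""
    gain = 10.0 ** (gain_db / 20.0)               # dB -> linear amplitude factor
    mix = gain * vocals + accompaniment
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix      # simple peak normalization

# Toy dimensions (hypothetical): 257 FFT bins, 128 hidden units.
n_bins, n_hidden = 257, 128
W1 = rng.standard_normal((n_bins, n_hidden)) * 0.01
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_bins)) * 0.01
b2 = np.zeros(n_bins)

frame = np.abs(rng.standard_normal(n_bins))       # stand-in magnitude spectrum
mask = mlp_mask(frame, W1, b1, W2, b2)            # estimated vocal mask
vocal_mag = mask * frame                          # masked bins approximate the voice
```

A gain of 8 dB corresponds to a linear amplitude factor of 10^(8/20), roughly 2.5, which matches the preferred enhancement level reported above.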
Below is a demo of the music tracks used in this experiment.
| Music Track | Example 1 | Example 2 |
S. Tahmasebi, T. Gajęcki, W. Nogueira. Design and Evaluation of a Real-Time Audio Source Separation Algorithm to Remix Music for Cochlear Implant Users. Frontiers in Neuroscience, 14, 434. doi: 10.3389/fnins.2020.00434. Epub 2020 May 14.
W. Nogueira, A. Nagathil, R. Martin. “Making Music More Accessible for Cochlear Implant Listeners: Recent Developments,” in IEEE Signal Processing Magazine, vol. 36, no. 1, pp. 115-127, Jan. 2019. doi: 10.1109/MSP.2018.2874059.
T. Gajęcki, W. Nogueira. Deep learning models to remix music for cochlear implant users. The Journal of the Acoustical Society of America, 143(6), 3602-3615. June 2018.
J. Pons, J. Janer, T. Rode, W. Nogueira. Remixing music using source separation algorithms to improve the musical experience of cochlear implant users. The Journal of the Acoustical Society of America, 140(6), 4338-4349. December 2016.
Head of Research Group: Prof. Dr.-Ing. Waldo Nogueira
DHZ-Deutsches HörZentrum Hannover
Phone: +49 (0)511 532 8025
Fax: +49 (0)511 532 6833