All posts by admin

Travel grant for Nawal at CIAP 2017

Nawal El Boghdady received a travel grant from the CIAP 2017 organization. She will present our collaborative work with Waldo Nogueira and Florian Langner from Hannover Medical Center, co-funded by Advanced Bionics: “Improving speech perception in cocktail-party situations for cochlear implants.”

Leanne’s talk at AMLaP 2017

Leanne Nagels’s submission “Lexical access in cochlear implant users” to AMLaP 2017 has been upgraded to a podium talk. The work was supervised by Dr. Anita Wagner and conducted in collaboration with Prof. Roelien Bastiaanse (Neurolinguistics). Congratulations!

9 June 2017: Prof. Dr. Hartmut Meister, Univ of Cologne

Assessment of audiovisual speech recognition in cochlear implant recipients: why and how?

Date: 9 June 2017, 14:00
Location: P3.270 (near KNO Department)

Broadcasting link: https://tinyurl.com/09-06-2017-Auditory-Seminar

Prof. dr. Hartmut Meister
Head Audiology Research
Jean Uhrmacher Institute for Clinical ENT-Research
University of Cologne

In their early days, cochlear implants (CI) served as aids for lip-reading. Thanks to technical and medical progress and the development of elaborate rehabilitation programs, many CI users these days show near-perfect speech understanding without visual cues. Nevertheless, audiovisual (AV) speech is still important, since visual cues are generally helpful in everyday communication. Thus, assessing different CI-processing schemes or fittings using AV speech offers high ecological validity. Moreover, CI recipients typically show better lip-reading abilities than their normal-hearing peers, and AV integration might differ between these populations.

However, assessing AV speech recognition is not a simple matter since validated speech material is scarce and establishing an AV speech corpus is costly and time-consuming. An alternative approach is using common speech-audiometric material and supplementing the visual modality by applying an avatar.

I will give an overview of our experience with the use of avatars in AV speech assessment, discuss opportunities and limitations, and give examples of their implementation in cochlear implant research.

VICI grant

Our VICI proposal “It takes two to communicate: Voice perception and linguistic content” is accepted! Since it was a group effort, a huge congratulations is in order for the entire group. The project will have a fundamental science part, where we will study the interactive connection between voice perception and speech communication, and an applied part, where we will do so in the context of hearing impairment. We will take advantage of our existing tools, combined under PICKA (Perception of Indexical Cues in Kids and Adults). To top it all off, we will also use other tools, potentially very fun and effective, such as a NAO robot. Stay tuned for exciting new details!

Symposium at ARO 2017

Our symposium, proposed in collaboration with Dr. David Landsberger (NYU), was accepted at ARO 2017. The title is “Auditory implants: Improving auditory function from pre-processing to peripheral and central mechanisms.”

Christina Fuller and music training with cochlear implants in the news

Dr. Christina Fuller has recently completed her PhD work on music and cochlear implants. Part of her research was on providing music training to cochlear-implant users, to explore what hearing benefits different training approaches may provide to this group. This also led to a wonderful collaboration with the Prins Claus Conservatorium, where Dr. Robert Harris provided a pilot training to a small group of implant users.

These efforts are featured on De Kennis van Nu:
https://dekennisvannu.nl/site/artikel/Muziekles-voor-doven-om-beter-te-kunnen-horen/8675

Our research featured in Scientific American

A potential transfer-of-training advantage that musicians may have in speech perception remains an elusive topic. Our recent research on the musician effect in speech perception (Başkent, D., and Gaudrain, E., 2016, Musician advantage for speech-on-speech perception, JASA-EL 139, EL51-EL56) showed a strong musician effect for speech-on-speech perception (a single target talker and a single masker talker). We had also manipulated the difference between the voices of the masker and target speech, as we suspected that the root of the musician effect was better perception of voice cues. This idea, however, did not seem to hold, leaving many more questions about where this advantage may come from.

This research is now featured in a fun Scientific American podcast:
https://www.scientificamerican.com/podcast/episode/bring-a-musician-to-untangle-cocktail-party-din/