Category Archives: Auditory seminars

Combined Auditory/Epigenetics Seminar 5 April 2018: Prof. Marianne Rots, UMCG

Cellular Reprogramming by Epigenetic Engineering

Date: 5 April 2018, Thursday
Time: 13:00 (lecture)
Location: 3215.0165
(faculty building ADL1, entrance from Antonius Deusinglaan 1)

We have a combined auditory/epigenetics lecture next week, on Thursday 5 April, given by Prof. Marianne Rots from the UMCG. It will be an informal lecture, and anyone who wants to learn more about epigenetics is welcome to join!
As with other seminars, audiologists can get credit for participation. Unlike other seminars, there will be no broadcasting, and sign-up is required, so please contact me with a quick yes if you want to join (deadline 28 March)!
If time allows, a lab tour may follow!

Auditory Seminar 19 January 2018: Dr. Carlos Trenado, University Hospital Düsseldorf, Germany

Corticothalamic feedback dynamics for attention and habituation and its application in tinnitus decompensation

Date: 19 Jan 2018, FRIDAY, 14:00 hr
Location: UMCG, Onderwijscentrum, Lokaal 13

Broadcasting link: https://tinyurl.com/19-01-18-AudSeminar

Dr. Carlos Trenado
Institute of Clinical Neuroscience and Medical Psychology, University Hospital Düsseldorf & Dept. of Psychology and Neurosciences, Leibniz Research Centre for Working Environment and Human Factors, Technical University Dortmund, Germany


6 October 2017: Dr. David Ryugo, Garvan Institute of Medical Research, Australia

The Auditory Nerve: Structure, Function, and Plasticity

Date: 6 October 2017, FRIDAY, 14:00 hr
Location: 3215.0165

Broadcasting link: https://tinyurl.com/06-10-2017-Auditory-Seminar

Prof. Dr. David Ryugo
Hearing Research
Garvan Institute of Medical Research
Sydney, Australia

All sound in the environment accesses the brain by way of the auditory nerve. This nerve is primarily composed of neurons with myelinated axons that innervate inner hair cells of the cochlea. In order to make sense of sound, neural activity must be closely linked in time to acoustic events. The auditory system has mechanisms to accomplish this task that will be discussed in this presentation. Each auditory nerve fiber forms a giant terminal in the brain with many synapses, and these terminals, called endbulbs of Held, have been observed in every land vertebrate examined to date. I will explore their specializations in hearing, their pathologic reactions to deafness, and their salvation by cochlear implants.

27 June 2017: Dr. Robert Harris, Prince Claus Conservatoire

Action-oriented predictive processing: grasping the aural world 

Date: 27 June 2017, 14:00
Location: UMCG, room P3.270 (near KNO Department)

Broadcasting link: https://tinyurl.com/27-06-2017-Auditory-Seminar

Dr. Robert Harris
Lifelong Learning in Music
Hanze University of Applied Sciences
Prince Claus Conservatoire

Current models of brain function indicate that sensory input is not only processed in two anatomically and functionally separate pathways, but that perception is the product of a predicting brain and not purely a representation of the input to which it has access. Sensory modalities are furthermore intertwined, making not only synesthesia possible in rare instances, but also the expropriation of neural resources as in the SMARC effect. The use of instrumental music training to enhance the hearing of cochlear implant recipients builds on these models by promoting the implicit acquisition of ideomotor associations between musical pitch, tone color, volume, and hand movement. 


23 June 2017: Dr. Lars Riecke, Univ. Maastricht

Neural entrainment to speech modulates speech intelligibility?

Date: 23 June 2017, 15:00
Location: UMCG, Panoramazaal, U4.123

Broadcasting link: https://tinyurl.com/23-06-2017-Auditory-Seminar2

Dr. Lars Riecke
Department of Cognitive Neuroscience
Faculty of Psychology and Neuroscience
University of Maastricht

Speech entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is ubiquitous in current theories of speech processing. Associations between speech entrainment and the acoustic speech signal, the behavioral listening task, and speech intelligibility have been observed repeatedly. However, a methodological bottleneck has prevented clarifying whether speech entrainment functionally contributes to speech intelligibility. Here, we addressed this issue by experimentally manipulating speech entrainment in the absence of systematic acoustic and task-related changes, using a novel approach that involves stimulating listeners with transcranial currents carrying speech-envelope information. Results from two experiments, one involving a cocktail party-like scenario and one a listening situation devoid of acoustic envelope information, consistently show an effect on listeners’ speech-recognition performance, demonstrating a causal role of speech entrainment in speech intelligibility. This finding supports entrainment-based theories of speech comprehension and suggests that transcranial stimulation with speech envelope-shaped currents can be used to modulate speech comprehension.


9 June 2017: Prof. Dr. Hartmut Meister, Univ of Cologne

Assessment of audiovisual speech recognition in cochlear implant recipients – why and how?

Date: 9 June 2017, 14:00
Location: P3.270 (near KNO Department)

Broadcasting link: https://tinyurl.com/09-06-2017-Auditory-Seminar

Prof. dr. Hartmut Meister
Head Audiology Research
Jean Uhrmacher Institute for Clinical ENT-Research
University of Cologne

In their early days, cochlear implants (CIs) served as aids for lip-reading. Thanks to technical and medical progress and the development of elaborate rehabilitation programs, many CI users nowadays show near-perfect speech understanding without visual cues. Nevertheless, audiovisual (AV) speech is still important, since visual cues are generally helpful in everyday communication. Thus, assessing different CI processing schemes or fittings using AV speech offers high ecological validity. Moreover, CI recipients typically show better lip-reading abilities than their normal-hearing peers, and AV integration might differ between these populations.

However, assessing AV speech recognition is not a simple matter, since validated speech material is scarce and establishing an AV speech corpus is costly and time-consuming. An alternative approach is to use common speech-audiometric material and supplement the visual modality with an avatar.

I will give an overview of our experience with the use of avatars in AV speech assessment, discuss opportunities and limitations, and give examples of their implementation in cochlear implant research.