Listening in Noise

Friday, June 22
9 a.m. – Noon

Hearing and listening in noise are often identified as the number one difficulty for people with hearing loss. A great deal of research has been done on this topic by hearing aid manufacturers, audiologists, psychologists, neurobiologists, audio engineers and others. This symposium, presented by top professionals in the field, will focus on noise reduction/cancellation circuitry in hearing aids and cochlear implants. The speakers will discuss current and future directions for solving the problem of hearing in noise.

Listening in Noise: Challenges and Opportunities

Andrew J. Oxenham, Ph.D., Moderator and Presenter 
Professor in the Departments of Psychology and Otolaryngology, University of Minnesota, and Scientific Co-director of the Center for Applied and Translational Sensory Science (CATSS)

Understanding speech in noisy backgrounds is one of the primary challenges facing people with hearing loss. In this presentation I will describe how the normally functioning auditory system is able to solve the cocktail party problem, based on the available acoustic information. I will then review the physiological changes associated with hearing loss and relate them to the perceptual challenges encountered in everyday listening situations. The presentation will conclude with an overview of current and future approaches to solving this critical health issue.
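
To make "available acoustic information" concrete, here is a minimal sketch, not drawn from the talk itself, of one binaural cue the normally functioning auditory system uses to separate sources: the interaural time difference (ITD), estimated by cross-correlating the two ears' signals. The sample rate, signal, and lag range are illustrative assumptions.

```python
# Minimal sketch (not from the talk): estimating an interaural time
# difference (ITD), one binaural cue used to separate sound sources,
# by cross-correlating the left- and right-ear signals.
import numpy as np

fs = 16000                            # sample rate in Hz (assumed)
rng = np.random.default_rng(0)
source = rng.standard_normal(800)     # hypothetical broadband source, 50 ms

true_itd = 8                          # delay in samples (~0.5 ms), assumed
left = source
right = np.roll(source, true_itd)     # right ear hears the source later

# Cross-correlate over candidate lags; the peak lag is the ITD estimate.
lags = np.arange(-20, 21)
xcorr = [np.sum(left * np.roll(right, -lag)) for lag in lags]
est = int(lags[np.argmax(xcorr)])
print(f"estimated ITD: {est} samples ({est / fs * 1e3:.2f} ms)")
```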

Understanding Individual Variance in Hearing Aid Outcomes in Quiet and Noisy Environments

Evelyn Davies-Venn, Ph.D., Au.D.
Assistant Professor, Department of Speech and Hearing Sciences, University of Minnesota

Some listeners with hearing loss show low speech recognition scores in spite of using hearing aids that optimize audibility. Beyond audibility, studies have suggested that differences in amplified speech recognition scores may be explained by listeners’ suprathreshold abilities. This talk will present findings from some of our efforts to assess the factors that govern individual variance in speech recognition scores for listeners with hearing loss, focusing on studies that have evaluated how audibility, spectral resolution, working memory and presentation level explain variance in speech recognition. The clinical implications of these findings will also be discussed.
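
As a concrete handle on "explaining variance," here is a minimal sketch, using synthetic data rather than the presenter's, of quantifying how much of the variance in speech recognition scores a set of listener factors accounts for, via ordinary least squares and R². The predictors, weights, and sample size are all made up for illustration.

```python
# Minimal sketch (synthetic data, not the presenter's): how much variance
# in speech recognition scores do listener factors explain? Ordinary
# least squares plus R^2 on hypothetical predictors.
import numpy as np

rng = np.random.default_rng(1)
n = 40                                # hypothetical number of listeners

# Hypothetical, standardized predictors: audibility, spectral resolution,
# working-memory span, presentation level.
X = rng.standard_normal((n, 4))
# Synthetic scores partly driven by the predictors (weights are made up).
scores = X @ np.array([0.6, 0.4, 0.3, -0.2]) + 0.5 * rng.standard_normal(n)

# Fit scores ~ intercept + predictors by least squares.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, scores, rcond=None)

pred = A @ beta
r2 = 1 - np.sum((scores - pred) ** 2) / np.sum((scores - scores.mean()) ** 2)
print(f"variance explained (R^2): {r2:.2f}")
```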

Looking for Sensory Solutions to Common Hearing Challenges in Non-human Animals

Norman Lee, Ph.D.
Assistant Professor of Biology, St. Olaf College

The sense of hearing in non-human animals may function in acoustic communication between conspecifics, for mate pairing and for resolving agonistic interactions. Hearing may also allow animals to detect predators and to locate prey. Auditory perception in these behavioral contexts depends on the auditory system's ability to detect, recognize, and localize sounds in natural acoustic environments. These environments, however, are often characterized by multiple sound sources contributing to background noise that can interfere with the perception of behaviorally salient signals. Because the sense of hearing has evolved repeatedly and independently in different animal taxa, there is the potential to discover a diversity of solutions to common sensory problems. Research on auditory perception in non-human animals may provide important insights for the development of hearing technologies, but direct biomimetic application is not without its challenges. In this presentation, we will explore how an acoustic parasitoid fly has evolved mechanically coupled ears capable of hyperacute directional hearing. The efficacy of these highly specialized ears, however, is limited to specific signals and noise conditions. Conversely, recent modeling and behavioral studies in Cope's grey treefrogs suggest that the auditory system can exploit statistical features of background noise for improved perception of salient signals in noise. Sound source segregation in this context likely involves computations by the central nervous system.
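
For readers who want a concrete handle on "statistical features of background noise," the sketch below illustrates one such feature, comodulation: noise bands whose amplitude envelopes rise and fall together across frequency. This is an assumed toy model for illustration, not the speakers' code; the carriers, envelope construction, and smoothing windows are all invented.

```python
# Illustrative toy model (an assumption, not the speakers' code): two
# noise bands are "comodulated" when their amplitude envelopes rise and
# fall together; a listener can in principle exploit that correlation.
import numpy as np

rng = np.random.default_rng(2)
fs = 16000
n = fs                                 # one second of signal

def slow_envelope():
    """Hypothetical slow positive envelope from heavily smoothed noise."""
    e = np.convolve(rng.standard_normal(n), np.ones(3200) / 3200, mode="same")
    return e - e.min() + 0.1

def envelope_corr(a, b, win=800):
    """Crude envelope comparison: correlate smoothed rectified signals."""
    k = np.ones(win) / win
    ea = np.convolve(np.abs(a), k, mode="same")
    eb = np.convolve(np.abs(b), k, mode="same")
    return np.corrcoef(ea, eb)[0, 1]

shared = slow_envelope()
co_band1 = shared * rng.standard_normal(n)           # band carriers are
co_band2 = shared * rng.standard_normal(n)           # stand-in white noise
ind_band = slow_envelope() * rng.standard_normal(n)  # independent envelope

print("comodulated bands:", round(envelope_corr(co_band1, co_band2), 2))
print("independent bands:", round(envelope_corr(co_band1, ind_band), 2))
```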

Towards Solving the Cocktail Party Problem

DeLiang Wang, Ph.D.
Center for Cognitive and Brain Sciences and Department of Computer Science and Engineering, Ohio State University

Speech separation, or the cocktail party problem, has evaded a solution for decades in speech and audio processing. Motivated by auditory perception, Dr. Wang has been advocating a new formulation of this old challenge: estimating an ideal time-frequency mask (binary or ratio). An important implication of this formulation is that the speech separation problem becomes open to modern machine learning techniques, and deep neural networks (DNNs) are well suited to the task because of their representational capacity. This talk will describe recent algorithms that employ DNNs for supervised speech separation. DNN-based mask estimation elevates speech separation performance to a new level and has produced the first demonstrations of substantial speech intelligibility improvements in background noise for listeners with hearing loss. These advances represent major progress toward solving the cocktail party problem.
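
For concreteness, here is a minimal sketch of the ideal binary and ratio masks this formulation targets, computed directly from known clean speech and noise; in practice a DNN is trained to estimate such masks from the noisy mixture alone. The placeholder signals, STFT parameters, and the 0 dB criterion are assumptions for illustration.

```python
# Minimal sketch of the ideal masks this formulation targets, computed
# from known clean speech and noise (the oracle a DNN is trained to
# approximate from the mixture alone). Signals here are placeholders.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
rng = np.random.default_rng(3)
speech = rng.standard_normal(fs)            # stand-in for a clean utterance
noise = 0.5 * rng.standard_normal(fs)       # stand-in for background noise

_, _, S = stft(speech, fs=fs, nperseg=512)  # time-frequency view of speech
_, _, N = stft(noise, fs=fs, nperseg=512)   # time-frequency view of noise

# Ideal binary mask: keep time-frequency units whose local SNR clears a
# criterion (0 dB here, an assumed choice).
local_snr_db = 20 * np.log10((np.abs(S) + 1e-10) / (np.abs(N) + 1e-10))
ibm = (local_snr_db > 0).astype(float)

# Ideal ratio mask: soft weight equal to the speech share of the energy.
irm = np.abs(S) ** 2 / (np.abs(S) ** 2 + np.abs(N) ** 2 + 1e-10)

# Applying a mask to the mixture suppresses noise-dominated units.
_, _, Y = stft(speech + noise, fs=fs, nperseg=512)
_, enhanced = istft(ibm * Y, fs=fs, nperseg=512)
```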

Robust Speech Processing in Human Auditory Cortex

Nima Mesgarani, Ph.D.
Associate Professor, Electrical Engineering Department at Columbia University

The brain empowers humans with remarkable abilities to navigate their acoustic environment under highly degraded conditions. This seemingly trivial task for normal-hearing listeners is extremely challenging for individuals with auditory pathway disorders, and it has proven very difficult to model and implement algorithmically in machines. This presentation will cover the results of an interdisciplinary research effort in which invasive and non-invasive neural recordings from human auditory cortex are used to determine the representational and computational properties of robust speech processing in the human brain. These findings show that speech processing in the auditory cortex is dynamic and adaptive, intrinsic properties that allow a listener to filter out irrelevant sound sources and achieve reliable, robust communication. Furthermore, incorporating the functional properties of these neural mechanisms into speech processing models can greatly improve current models of speech perception and, at the same time, lead to human-like automatic speech processing technologies.
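
One standard computational tool in this line of research (a general method in the auditory cortex literature, not necessarily the speaker's exact pipeline) is the spectro-temporal receptive field (STRF): a linear map from a stimulus spectrogram to a neural response. The sketch below fits an STRF by ridge regression on toy data; all dimensions and the penalty are assumptions.

```python
# Sketch of a standard tool in this literature (not necessarily the
# speaker's pipeline): fitting a spectro-temporal receptive field (STRF),
# a linear map from stimulus spectrogram to neural response, by ridge
# regression on toy data.
import numpy as np

rng = np.random.default_rng(4)
n_freq, n_lag, n_time = 16, 10, 2000              # hypothetical dimensions

spec = rng.standard_normal((n_freq, n_time))      # toy stimulus spectrogram
true_strf = rng.standard_normal((n_freq, n_lag))  # toy ground-truth filter

def lagged_design(spec, n_lag):
    """Each row holds the current and previous n_lag-1 spectrogram frames."""
    n_freq, n_time = spec.shape
    X = np.zeros((n_time, n_freq * n_lag))
    for lag in range(n_lag):
        X[lag:, lag * n_freq:(lag + 1) * n_freq] = spec[:, :n_time - lag].T
    return X

X = lagged_design(spec, n_lag)
resp = X @ true_strf.T.ravel() + 0.1 * rng.standard_normal(n_time)

lam = 10.0                                        # ridge penalty (assumed)
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ resp)
strf_hat = w.reshape(n_lag, n_freq).T             # back to (frequency, lag)

corr = np.corrcoef(strf_hat.ravel(), true_strf.ravel())[0, 1]
print(f"recovered STRF vs. ground truth, correlation: {corr:.2f}")
```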