AG Bell 2014: Maximizing Brain Adaptability Research Symposium

Maximizing Brain Adaptability: Enhancing Listening for Language Development, Speech Perception, and Music Appreciation

 Beverly Wright, Ph.D., Northwestern University, School of Communication 

Kate Gfeller, Ph.D., University of Iowa, School of Music

Pamela Souza, Ph.D., Northwestern University, School of Communication 

Emily Tobey, Ph.D., University of Texas at Dallas, School of Behavioral and Brain Sciences 

Each Convention, AG Bell’s NIH/NIDCD-sponsored Research Symposium gives attendees a window into the latest advances in speech and hearing science straight from the experts themselves, and this year’s presentation was no different.  For the first time in Convention history, the Research Symposium featured an all-female panel of scientists, who shared their work relating to the theme of Maximizing Brain Adaptability.  Each scientist approached the topic from a different angle, giving attendees a multifaceted view of how current scientific discoveries can be applied to better the lives of people with hearing loss.

Dr. Pamela Souza, who herself has family members with hearing loss, discussed improving audibility to improve speech understanding.  Dr. Souza investigates how much of the speech signal must be audible for listeners with and without hearing loss to understand spoken messages.  The general rule of thumb is that if a hearing aid user’s aided thresholds fall within or above the “speech banana,” conversational speech should be audible to that person.  However, Dr. Souza pointed out, speaking levels tend to vary ±30 dB around the average, even within the same conversation.  While adults with typical hearing need 50% audibility of the speech signal to understand 80% of the message, adults with hearing loss require 80% audibility to understand 80% of what is said.  For children with hearing loss, the percent audibility needed is even higher, as children have fewer years of language experience on which to draw to fill in the gaps.
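
Work in this area is typically grounded in formal measures such as the Speech Intelligibility Index, but the core idea can be shown with a toy calculation: treat percent audibility as the share of speech frequency bands that sit above a listener’s aided thresholds.  The sketch below is hypothetical, with invented band levels and thresholds, and is not how clinical audibility is actually computed.

```python
# Toy "percent audibility" estimate: which speech bands exceed the
# listener's aided thresholds? Illustrative values only; this is not
# the real Speech Intelligibility Index calculation audiologists use.

speech_band_levels = {250: 55, 500: 60, 1000: 55, 2000: 50, 4000: 45}  # dB SPL
aided_thresholds = {250: 30, 500: 35, 1000: 45, 2000: 55, 4000: 60}    # dB SPL

audible_bands = [freq for freq, level in speech_band_levels.items()
                 if level > aided_thresholds[freq]]
percent_audible = 100 * len(audible_bands) / len(speech_band_levels)

print(f"Audible bands (Hz): {audible_bands}")
print(f"Estimated audibility: {percent_audible:.0f}%")
# With these made-up numbers, the 2000 and 4000 Hz bands fall below
# threshold, leaving 60% audibility, short of the ~80% that adults
# with hearing loss need for good understanding.
```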

Souza discussed some of the “enemies” of good audibility and how adaptations in hearing aid fitting and programming can combat them.  In noisy and reverberant environments, several types of masking combine to greatly decrease the audibility of the speech signal.  In noise, energetic masking occurs when background noise overlaps speech in frequency and time, so that only glimpses of the message reach the listener.  Informational masking occurs when background noise does not overlap with the speech signal but draws the listener’s attention away and causes distraction.  In rooms with a lot of reverberation, or echo, self-masking (distortion within a speech sound) and overlap masking (residual energy from one sound overlapping the sounds that follow) also decrease the intelligibility of the speech signal.  While younger listeners with typical hearing are able to adapt to reverberation quickly, older listeners and those with hearing loss have a much more difficult time in these harsh listening environments.

How can audiologists adapt hearing aid fittings to ameliorate these difficulties?  Souza noted that children’s audibility needs are different from adults’ (a good pediatric audiologist with knowledge of listening and spoken language is a must!) and that children need greater bandwidth than adults, that is, access to a wider frequency spectrum.  Speech carries useful information through 8000 Hz, while most hearing aids amplify only through 3000-5000 Hz.  This is not enough, especially for children who are still learning language and how to produce speech sounds.  Hearing aids that shape the frequency-gain response (providing more amplification at frequencies where the hearing loss is greater), directional microphones (which favor speech coming from in front of the listener over noise coming from behind), and digital noise reduction programs can all be of use.  These adaptations improve the signal the listener receives through the hearing aids, but environmental modifications (reducing noise) and assistive technology (FM or soundfield systems) can also improve audibility, enabling people with hearing loss to hear and understand speech better in a variety of settings.
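
To make the “more gain where there is more loss” idea concrete, here is a minimal sketch of the classic half-gain rule, a textbook simplification in which prescribed gain at each frequency is half the measured hearing loss there.  Real prescriptive fitting formulas (such as NAL or DSL) are considerably more sophisticated, and the audiogram values below are invented.

```python
# Minimal sketch of frequency-shaped gain using the classic half-gain
# rule: prescribe gain equal to half the hearing loss at each frequency.
# Illustrative only; actual fittings use validated prescriptive formulas.

audiogram = {250: 20, 500: 30, 1000: 45, 2000: 60, 4000: 70, 8000: 75}  # dB HL

def half_gain(threshold_db_hl: float) -> float:
    """Return prescribed gain (dB) as half the hearing loss."""
    return 0.5 * threshold_db_hl

for freq in sorted(audiogram):
    loss = audiogram[freq]
    print(f"{freq:>5} Hz: loss {loss:>2} dB HL -> gain {half_gain(loss):4.1f} dB")
# A sloping loss like this one gets more gain in the high frequencies,
# which is the frequency shaping Souza described.
```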

Dr. Beverly Wright presented on improving auditory skills through training.  She discussed perceptual learning: the learning of skills on basic tasks that may later be translated into effective interventions to help people with hearing loss improve their auditory skills.  She began her talk by explaining that the auditory system, which we tend to think of as fixed in its capacity, really is not; practice and learning can change performance on auditory tasks.  Her research centers on training subjects (in this case, adults with normal hearing thresholds) on basic listening tasks, such as pitch discrimination.  From her studies, Dr. Wright has identified four principles of perceptual learning:

  • Just do it.  Learning takes practice.  Training sensitizes the brain, helping it determine which information is important to focus on and which to discard.

  • Practice, practice, practice.  The brain needs a high level of exposure to hit the threshold for learning, mastery, and retention of a new skill.  Practically, this translates into a need to provide children with more talk and language experience to help them hit this threshold.

  • Enough is enough.  Training beyond the threshold does not lead to further learning gains.  More research is needed to determine the optimal amount of focused training required to reach the threshold for a given task.

  • Two wrongs make a right.  Taking breaks during learning leads to regression unless those breaks are filled with passive exposure.  The greatest learning occurs when active training is combined with periods of passive exposure.  In real-world terms, this suggests that children will learn more language when targeted therapy sessions are combined with bathing the child in language all day long.

Wright also noted that her experiments have contrasted auditory and visual learning and found that when the two systems compete (for example, when a subject in an auditory learning task receives visual exposure, rather than passive auditory exposure, during breaks), the visual system takes over almost immediately: if there is any competition between the two systems, vision will win.  These findings have important implications for counseling parents on communication mode choices for their children with hearing loss, and they strongly support an auditory-based approach to the development of spoken language, should that be the parents’ desired outcome for their child.

Dr. Emily Tobey gave a historical retrospective on hearing loss and language.  Her presentation included photographs and illustrations spanning from Volta’s first experiments in electroacoustic stimulation, through early oral training methods, to the creation of the cochlear implant.  Cochlear implant technology, whose computer “processor” once filled an entire wall of a room, is now small enough to be worn behind the ear.  Just as the technology has changed, so have outcomes for children with hearing loss who are learning listening and spoken language.  Dr. Tobey shared the results of studies tracking the speech, language, and listening performance of children with hearing loss over time.  Her studies indicated consistently better performance across a variety of measures for children who received cochlear implants at a younger age and who were enrolled in listening and spoken language intervention programs.  Tobey stated that physiological changes in speech output begin to appear within fifteen seconds of removing a cochlear implant.  While it may take longer for these changes to become apparent to the human ear, the data are a strong warning against assuming that “taking a break” from listening has no negative consequences for children who are learning to listen and speak.

Dr. Kate Gfeller shared her research on music enjoyment among cochlear implant recipients.  Cochlear implants were originally designed with the goal of accurately conveying the speech signal to people with hearing loss.  Music, however, contains a far greater range of sounds than speech, and while the “rule” of conversation is that one person talks at a time, in music there are often dozens of instruments and voices sounding together.  Because of this, cochlear implant users have historically reported poorer ability to enjoy music than to comprehend speech.

Dr. Gfeller’s work, however, shows that with practice, CI users can improve their ability to listen to music, and the benefits of this experience may extend beyond the simple pleasure of hearing a good song.  Music is a social experience, with the power to evoke deep emotions and connect us to our communities and our world.  For children, music presents new vocabulary, often at a slower rate and with multiple repetitions, and enables them to participate in typical early childhood experiences alongside their peers.

The presentation included many tips for people with hearing loss on how to get the most out of listening to music.  In Dr. Gfeller’s analysis, the most salient parts of music for people with hearing loss are the rhythm and the lyrics, so choosing music with little or no backing track (such as a cappella singing) is a good place to start the rehabilitation process.  Environmental modifications, like listening in a quiet room and choosing optimal seating at concerts, and assistive technology, like a DAI (direct audio input) cable or even headphones, can help improve sound quality.  Today, the sound quality and performance levels achieved by cochlear implant users are such that learning to play an instrument, not just learning to enjoy music, is not out of reach for CI recipients.  Dr. Gfeller recommended instruments with fixed tuning, like pianos or percussion, as the easiest for children with CIs to learn to play.

In her many interviews with CI recipients about music enjoyment, Dr. Gfeller has identified a number of factors that contribute to success.  She noted, though, that accuracy does not always equal enjoyment: some CI users with good listening accuracy still do not enjoy listening to music, and others with lower accuracy enjoy it a great deal.  Gfeller also noted that her work has found no statistically significant differences in music enjoyment between users of different CI brands or different processing strategies.  A music therapist by training, Gfeller encouraged attendees to consider music therapy intervention for children with hearing loss, but noted that a music therapist with experience in listening and spoken language is a must, and that coordination among all members of the child’s team (music therapist, LSLS, audiologist) will lead to the greatest carryover of goals.  Music enjoyment is within reach of people with cochlear implants.  With practice, the brain is remarkably adaptable!

The presentations concluded with a Q&A session, during which the researchers shared their personal journeys into speech and hearing science and fielded questions from audience members.  It was a morning filled with learning, and one that left attendees buzzing with ways to implement these findings in their practice once they returned home from Convention 2014!  Events such as the Research Symposium help us as listening and spoken language professionals stay current with the latest research in the field and fulfill our commitment to bring families the latest and greatest in evidence-based practice.
