In this post: "What Does Acoustic Accessibility Look Like?" and "The Impact of Auditory Verbal Therapy on Literacy Skills."
What Does Acoustic Accessibility Look Like?
Carol Flexer, Ph.D., LSLS Cert. AVT, The University of Akron, Akron, OH
Jane Madell, Ph.D., LSLS Cert. AVT, Private Practice
CLICK HERE for presentation handouts.
How do we ensure that children with hearing devices have excellent access to all of the sounds of speech they will need to learn language and speech through listening and to thrive in mainstream environments?
New brain research shows that auditory tissue is the dominant component of the cortex and that the earlier it is stimulated, the greater the potential for its development. This is because the brain is more plastic earlier in life, and earlier stimulation provides time for more repetition and practice, which the brain needs to form and cement those connections. Dr. Flexer noted that, "The brain is a probability organizer." The type of input it receives determines the kinds of connections it makes. The human ear begins to function in utero at 20 weeks gestation. Our children with hearing loss are born already months behind their hearing peers – we have no time to waste! Remember, "If eyes are open, technology should be on!"
Appropriate amplification + acoustic accessibility + enriched environment = auditory brain development
After establishing this background information, Madell and Flexer discussed the factors that affect acoustic accessibility and ways that parents and professionals can overcome them. They noted that if a child is not progressing as expected – and we should all have very high expectations – suspect technology issues first. Today's technology is fantastic, but too often it is not programmed to the fullest extent of its capabilities. Children need access to sound across the frequency range, and high frequencies like 6000 and 8000 Hz should be tested to ensure that the child has access to high-pitched sounds like /s/ that are so important for understanding language (think about learning language without auditory access to plurals, possessives, or third person singular markers!). Children also must be able to hear at soft levels (15-20 dB) across frequencies. Thresholds any higher than that (in the middle of the speech banana) mean too many speech sounds are not audible to the child – and that is in the BEST acoustic environment. However, we must also be cautious not to overprogram children. Aided thresholds below 15 dB often lead to great distortion of the speech signal. Aim for aided thresholds of 15-20 dB from 250-8000 Hz. Additionally, technology should be distortion-free and balanced between ears.
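For readers who think in concrete terms, the 15-20 dB target window from 250-8000 Hz can be expressed as a simple check. This is a minimal sketch with made-up, illustrative threshold values (not patient data, and not a clinical tool); the function name and audiogram format are my own assumptions:

```python
# Sketch: flag aided-audiogram frequencies that miss the 15-20 dB
# "speech string bean" target window described above.
# All values below are hypothetical, for illustration only.

TARGET_MIN, TARGET_MAX = 15, 20  # target aided-threshold window, in dB

def check_aided_thresholds(audiogram):
    """Return {frequency: note} for frequencies outside the 15-20 dB window."""
    flags = {}
    for freq_hz, threshold_db in audiogram.items():
        if threshold_db > TARGET_MAX:
            flags[freq_hz] = "too high: soft speech sounds at this pitch may be inaudible"
        elif threshold_db < TARGET_MIN:
            flags[freq_hz] = "too low: risk of overprogramming/distortion"
    return flags

# Illustrative aided audiogram, 250-8000 Hz (dB)
aided = {250: 15, 500: 20, 1000: 20, 2000: 25, 4000: 30, 6000: 18, 8000: 35}
for freq, note in sorted(check_aided_thresholds(aided).items()):
    print(f"{freq} Hz at {aided[freq]} dB -> {note}")
```

In this hypothetical example, the 2000, 4000, and 8000 Hz thresholds would be flagged as outside the target window, which matches the presenters' point that technology is often not programmed to the fullest extent of its capabilities.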
Acoustic accessibility leads to speech intelligibility. In simpler terms – if you can hear it well, you can learn to say it well. Every speech sound needs to be audible at conversational and soft levels and over a distance. We often think about the Ling Six Sounds and the Speech Banana, but there are 44 phonemes (speech sounds) in English, and hearing in the middle of the speech banana is just not good enough. Instead, we need to think about the "speech string bean" (hearing at the top of the banana, with 15-20 dB aided thresholds across the board) and do more testing to ensure that all phonemes are accessible to the listener. Noise is also a factor in acoustic accessibility, and professionals must counsel families to be aware of the listening environment in their homes – a child's listening, speech, and language development depend on it. Apps (here is a link to one such app) can provide a rough (non-calibrated) measure of noise in the home and help families realize that the dishwasher, TV, radio, and other sources of noise pollution present a real barrier to acoustic accessibility for new listeners.
Aided speech perception testing is a critical component of ensuring acoustic accessibility. While real ear measures (e.g., the RECD, Real-Ear-to-Coupler Difference) provide a machine's verification that equipment is functioning up to the parameters programmed into the computer, only behavioral speech perception testing provides VALIDATION of how the listener is perceiving and using the sound. Real ear measures don't show that the information is useful to the child; they just show that the information got somewhere. They are useful tests, but only when coupled with comprehensive aided speech perception testing. So what kind of speech perception scores do we want to see? Well, as Dr. Madell said, "If it's not a good score on a math test, it's not a good score on speech perception." For excellent access to speech, children need to have speech perception scores in the 90-100% range. Anything less, and their ability to learn through listening is seriously compromised. If this sounds high to you, you're right – it is, because it's necessary. As the presenters stated at the beginning of this presentation, the technology is great and the ability is there; we are just not programming it to be maximally effective for our children. With appropriate amplification and intervention, these scores are attainable. Go out there and get them!
Drs. Madell and Flexer also presented some fantastic research on the importance of FM use. All learning is a combination of intrinsic ability (within the child) and extrinsic input (the environment). When new listeners have weak intrinsic capability, we must enrich the extrinsic environment to help them close that gap. Use of FM systems helps children's intrinsic capability (what they perceive) match more closely to their extrinsic input (what was said). Any degree of hearing loss weakens the child's intrinsic capacity unless we provide enriched input. Until your brain has the "program," you can't fill in the gaps. The belief that FM technology somehow "weakens" children is not in line with the latest research. The presenters noted that children will have many "real world" opportunities to practice listening in noise, but unless we provide enriched input to grow their auditory brains, they will lack the strong internal model they need to piece the message together in those noisy situations.
Here is a video by Dr. Madell illustrating some of the points made in this presentation:
The Impact of Auditory Verbal Therapy on Literacy Skills
Stacey Lim, Au.D., CCC-A, Kent State University, Kent, OH
Jocelyn R. Folk, Ph.D., Kent State University
Lynette Kriedler, B.A. Psych, Kent State University
Stephen M. Brusnighan, M.A., Kent State University
CLICK HERE for presentation handouts.
Dr. Folk outlined the components of literacy:
Alphabetic principle: knowledge of how the letters in the alphabet represent sounds
Phonological awareness: the ability to detect and manipulate the sounds in spoken words (read more of what I have written about phonological awareness HERE)
Phonics: understanding how letters and letter combinations represent sounds in words
Vocabulary: knowing the meanings of words
Syntax: the ways that words combine to form phrases, clauses, and sentences (think grammar)
Comprehension monitoring: readers must monitor their understanding while reading (using skills like rereading, asking for clarification, using context clues, etc.)
Historically, literacy outcomes for children with hearing loss have been quite poor. New research shows that children with hearing loss who communicate using a listening and spoken language approach are achieving much higher levels of reading ability because they have good acoustic access and meaningful language experiences in English (if you’re not hearing and speaking English, it is going to be tremendously difficult to learn to read and write it fluently).
Folk, Lim, and their team wanted to measure reading comprehension abilities in children raised with the Auditory Verbal approach. Their purpose was to "discover the cognitive skills AV readers bring to the task of reading and how these skills impact reading comprehension." Using a variety of speech, language, intelligence, and reading tests, the researchers found that children in the AV group showed no significant difference in reading abilities when compared to their typically hearing peers. However, while the average outcomes of the AV group were within normal limits compared to typical peers, there was much greater variability in reading achievement within the AV group. Upon further examination, Folk and Lim found that several factors differed between high and low achievers in the AV group, notably: vocabulary size, age of early intervention, and minutes per day of print exposure. While age of intervention cannot be manipulated after the fact, parents and professionals do have the power to help children grow their vocabularies, and certainly can work to increase time-on-task when it comes to print exposure. They recommended:
Read, read, read! Every single day, as much as possible.
Teachers designing classroom reading challenges can challenge children to read for X number of minutes instead of X number of books. This encourages children to choose harder books and to value quality over quantity instead of rushing through books that are too easy for them.
Improve distance hearing and access to auditory information. This will help children increase their ability to overhear and pick up on new vocabulary. Remember that this study found vocabulary to be a key predictor of reading abilities.
Expose children with hearing loss to environments that are rich with literacy opportunities. Point out environmental print (words on signs, instructions, etc.), help children learn the fairy tales and stories that are important for inclusion with typical peers, make experience books, and provide children with tools (paper, crayons, etc.) to have early writing experience.
Model good literacy habits early – children need to read with you and to see you reading!