Common “knowledge” says that hearing happens with the ears and speech happens with the mouth, but this couldn’t be further from the truth. In reality, our ears, brain, and mouth form an integrated loop: we only speak as well as we hear, and we only hear as well as our brain processes sound. So what often seems like a “speech issue” can really be tied to hearing. While it’s crucial to have appropriate Auditory Verbal Therapy services to help your child learn to listen and talk, it’s also important to understand how changes made in the audiology booth* can help to resolve these issues.
First, we have to understand how sounds work. When you say /a/, it may sound like “ah” to you — one simple sound. Analyzed acoustically, however, /a/, like all speech sounds, is made up of multiple formants, or bands of speech energy, the way various notes on a piano combine to make a chord. These formants are usually abbreviated F1, F2, and F3. Some speech sounds overlap in their formants — they’re not exactly the same (or they would be the same sound), but they share some of the same bands of energy. If a person has limited access to sound (most commonly, they can hear low frequencies but not high ones), they may perceive two distinct sounds as being the same.
If a person does not have good access to all of these bands of energy, it’s easy to mishear and perceive sounds incorrectly. A child may produce /i/ (ee) for /s/ because, without access to very high frequencies, /i/ and /s/ actually sound the same. To a person with typical hearing, that sounds crazy — those sounds are so different! But it’s true. Without good low-frequency hearing, sounds like /m/ and /u/ (oo) can also be confused. If a child continually leaves out the /f/ sound, is it that she can’t say it yet… or can she not even hear it? Before rushing to diagnose something as a speech issue, we have to ensure good mapping first.
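To make the idea concrete, here is a minimal sketch in Python of how a region of hearing loss can erase some sounds entirely while leaving others only partly audible. The frequency bands below are rough textbook approximations chosen for illustration, not clinical data:

```python
# Rough approximations of where a few phonemes carry their energy (Hz).
# These values are illustrative estimates, not audiological measurements.
PHONEME_BANDS = {
    "/i/ (ee)": [(280, 320), (2200, 2500)],  # low F1 plus a very high F2
    "/u/ (oo)": [(300, 350), (800, 900)],    # F1 and F2 both low
    "/m/":      [(250, 350)],                # low-frequency nasal murmur
    "/s/":      [(4000, 8000)],              # high-frequency frication noise
}

def audible_bands(bands, low_cutoff=20, high_cutoff=20000):
    """Keep only the energy bands that fall inside the listener's audible range."""
    return [(lo, hi) for (lo, hi) in bands if lo >= low_cutoff and hi <= high_cutoff]

# A listener who hears low frequencies but nothing above ~1000 Hz:
for phoneme, bands in PHONEME_BANDS.items():
    print(phoneme, "->", audible_bands(bands, high_cutoff=1000) or "inaudible")
```

In this simulated high-frequency loss, /s/ drops out completely while /i/ keeps only its low band, so neither sound delivers its distinguishing high-frequency information to the listener.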
How can we tell the difference between issues of hearing and issues of speech? It may be a mapping issue, not a speech issue, if the child does not detect the sound (e.g. does not turn toward it when it is presented) or cannot discriminate between the sounds he’s confusing (e.g. cannot point to the correct picture for “E” versus “see” to show he hears the difference between /i/ and /s/).
Some mapping changes that can be made in this case include:
Mapping to the “gold standard” of thresholds at 15–20 dB across the board through 6000 Hz
Adjusting programming so that the child’s speech discrimination scores are 90% or better for normal conversational speech in quiet and for speech in noise
Using an FM system to ensure a better signal-to-noise ratio (the important stuff, the speaker’s voice, reaches the child louder than the noise of the room)
Phonemic mapping: instead of mapping by pitch, verify that each phoneme is audible to the child using live-voice presentation, and adjust the cochlear implant (CI) map based on that data
*Note that many of these suggestions also apply to users of digital hearing aids, which can likewise be programmed to remediate some of these issues. If issues persist because the hearing aid is incapable of getting the listener to the “gold standard,” it’s time to consider moving to CI technology.