When I tell people that my job is teaching children with hearing loss to listen and talk without the use of sign language, it usually stops them in their tracks for a minute. The first question I get is, “How?” — which leads to a whole discussion about the auditory brain. The second most frequent question is, “So you teach lipreading, right?” Not exactly…
First, a word about terminology. While most people call it “lipreading,” “speechreading” is really the more appropriate term. After all, we don’t understand what others are saying based on the movements of their lips alone. Visual cues that help us understand speech also include the teeth, tongue, facial expression, and more. However, since both “lipreading” and “speechreading” are widely used by the general public, I’ll use them interchangeably here.
Only about 30-40% of the phonemes (speech sounds) in English are distinguishable on the lips. Try this: look into a mirror and silently mouth the words “mat, bat, pat” to yourself. Can you see much of a difference? See if a friend or family member can tell which one you are saying without voice. It’s pretty tough! If a person with hearing loss is trying to understand spoken English through lipreading alone, they’re having to guess roughly 60-70% of the time — and that’s an adult, who already has a knowledge of the English language. Can you imagine being a child in school, without adult language knowledge, trying to learn new information in the classroom by lipreading? Guessing for most of the school day sounds exhausting! I know I could never do it.
Reliance on lipreading is further complicated by the fact that the 30-40% figure only holds true under ideal lipreading conditions. What happens when the speaker’s face is in shadow, his back is turned, he has a mustache, he’s eating, or all of the above? What a tough situation!
If a person with hearing loss has well-programmed hearing aids, Baha, or cochlear implants (as appropriate for their particular hearing profile), they should have access to all of the sounds of speech at soft conversational levels and in noise. It takes training for the brain to interpret these signals, of course, but it is very possible — and preferable — to enable people with hearing loss to access spoken language through the natural channel of audition. Learning to talk by looking leads to results that… sound like you learned to talk by looking. The beauty of hearing technology is that we can teach children to learn to listen, and if they learn to listen, they can learn to talk along a developmental trajectory like any other child. This leads to much better vocal quality, better articulation, and more natural-sounding speech than relying on a 30-40% accurate system (vision) to learn to talk ever could. Imitating mouth movements does not lead to the quality of speech that accurately hearing and matching sounds does.
In addition to the primacy of audition in learning spoken language, there’s also no need to formally “teach lipreading” because… we’re all doing it anyway! Did you know that infants (even those with hearing within normal limits) pay specific attention to the mouths of speakers, and will look away if what they’re hearing does not match the mouth they’re seeing displayed on a video screen? (See research from the Kuhl and Meltzoff team from the early 1980s onward.) Without ever having been taught the correspondence between sight and sound, they are able to discriminate between matched and mismatched information. Likewise, children and adults with hearing loss use this natural lipreading ability to supplement the information they receive through audition, all without formal training. Up until the age of fourteen, speechreading abilities in children with hearing loss and children with typical hearing are actually statistically indistinguishable (MacSweeney, 2013) — both groups can lipread pretty well! We should also remember that lipreading is enhanced by a solid knowledge of language. If you know English (or whatever language you’re trying to lipread) well, you’ll be better able to use context and make educated guesses. These are some of the many reasons why people who lament that children who don’t learn sign language will “never be able to communicate when the CIs are off” completely betray their lack of research knowledge (and knowledge of current waterproof hearing technology).
Speechreading can be a great supplement to an already well-developed listening brain, but it no longer needs to be the primary way for people with hearing loss to gain information about spoken language, nor does it need formal training to be incorporated into a person’s communication toolkit.