“There are three kinds of lies: lies, d*mned lies, and statistics.” — Mark Twain
It can certainly seem that way, can’t it? Most people are not given the opportunity to take a rigorous research methods class over the course of their educational careers, and even our high schools rarely teach students the skills they need to be informed consumers of the research and statistics that surround us every day. When confronted with the sea of information on hearing loss, communication modes, and amplification options, what’s a parent of a child with hearing loss to do?
I believe that it is vital for all adults to have at least a small working knowledge of how to interpret research and statistics. If not, you are going to be taken for a ride! Here are some key points to keep in mind:
- Consider the legitimacy of the source. Did this study come from a peer-reviewed journal? Peer-reviewed journals require impartial experts to review articles before publication. These reviewers are professionals who are highly qualified in the field, and their job is to inspect articles for any hint of bias or faulty research methods. An article published by a mainstream news outlet (say CNN or MSNBC) does not undergo the same scrutiny. Often, mainstream news outlets will report on the latest studies from peer-reviewed journals, but they usually provide just a summary that rarely gets all the facts straight.
- Consider possible bias. I wouldn’t go to a university that promotes ASL for research on cochlear implants, just as I wouldn’t ask Pepsi’s manufacturer for an honest review of Coca-Cola. I wouldn’t believe a study sponsored by CI Brand X that (surprise, surprise) shows that its CIs clearly beat out those from Brands Y and Z.
- Consider the sample size. Sample size means the number of participants in the study. If a study of one class of ten children found that the children who wore blue scored better on speech and language measures than children who wore other colors, is that really a significant finding? In general, the more participants in the research, the stronger the findings will be, and the more generalizable they will be to the population at large.
- Consider the age of the study. A study on cochlear implants from the 1990s used devices that are dinosaurs compared to today’s technology. Likewise, comparing listening and spoken language outcomes from the days before high-powered digital hearing aids and cochlear implants to those of today’s listening, talking deaf kids is comparing apples to oranges. Look for current research using the latest technology.
- Consider the subjects. Who participated in this study? What were the characteristics of the people who participated? A study that measures the spoken language outcomes of children implanted at four years old or older in manual or total communication classrooms, and concludes from those children’s outcomes that CIs are not effective, does not necessarily hold true for a child implanted at 12 months whose family has chosen an auditory verbal program. If you’re looking for information to help you understand your child’s case, look for studies that profile children of similar age and implantation/intervention history.
- Consider the measures used. How are the researchers measuring what they say they’re measuring? Do the tests they used actually measure the skills being investigated? For example, if the researchers want to assess children’s conversational competence but give only a single-word vocabulary test, does that really tell us much about how children will fare in real life?
- Consider claims of “proof.” Studies do not conclusively “prove” anything; they either support or do not support a hypothesis. People who claim that a single study supporting their viewpoint “proves” they are right do not understand research methods.