Truth in Advertising and Statistics
“Proper treatment will cure a cold in seven days, but left to itself a cold will hang on for a week.” So said Henry G. Felsen, “a humorist and no medical authority” cited in Darrell Huff’s delightful book How to Lie with Statistics.
We all encounter such meaningless statistics every day. They can appear as exhortative assertions in English text, or as tantalizing charts, graphs or maps. The problem, of course, is that many of these statistics tout false or misleading conclusions. Some of them are bald-faced lies, no ifs, ands or buts about it.
Consider advertising. Ads today deliver catchy, persuasive messages intended to capture your imagination and your wallet. Companies routinely make broad, vague claims about their products, like “nine out of ten customers surveyed prefer Zoid toothpaste.” Such claims are spouted into the ether without any indication of who these Zoid customers are, how many of them participated in the study, or whether they were compensated for their opinions. Nor is there any reference to the source of the study, which could help clear up doubts about the quality, reliability, and truthfulness of its claims.
Then there are those sensationalized “trends”, gleefully reported by the media with colorful charts and graphs. Such slick images often provide eye candy without factual substance. The scales on axes can be expanded, compressed, or constrained to illustrate the claimed trend in a misleading fashion. Insufficient data or visual complexity can confound the assertions. Once again, the source of the information is unlisted, untraceable, and unverifiable.
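A small numeric sketch makes the axis trick concrete. The figures below are invented for illustration: when a bar chart’s baseline is truncated instead of starting at zero, a trivial 4% difference can be drawn as a fivefold one.

```python
# Hypothetical yearly sales figures: a modest 4% rise.
old, new = 100.0, 104.0

# Honest chart: bars drawn from a zero baseline, so bar heights are
# proportional to the data themselves.
honest_ratio = new / old  # the second bar looks only slightly taller

# Misleading chart: the y-axis starts at 99, so each bar's height is
# (value - 99). The same data now produce a dramatic visual gap.
baseline = 99.0
truncated_ratio = (new - baseline) / (old - baseline)

print(f"honest visual ratio:    {honest_ratio:.2f}")     # ~1.04
print(f"truncated visual ratio: {truncated_ratio:.2f}")  # 5.00
```

The data never change; only the axis does, which is why a chart without labeled, zero-anchored (or at least clearly marked) axes deserves suspicion.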
In cases like these, we must remember that the person responsible for putting that ad or perspective in front of you is trying to sell you a product or a point of view. The advertiser hopes that their words and pictures will activate just the right neural pathways in your brain to get you to buy, buy, buy or believe, believe, believe.
We now turn to the statistics that appear in academic papers and books. They’re reliable, right? We expect academic authors to subscribe to the standards of their professions when they report their findings. Their published work has been refereed for originality and accuracy, and therefore, we expect it to be credible and correct.
Unfortunately, this is not always the case when it comes to statistics. Statistics is a tricky field, and there are any number of ways that a poorly designed experiment can lead to false conclusions in research. A few are listed below.
1. You didn’t develop a rigorous methodology for your statistical experiment. Suppose you want to study whether tortoiseshell housecats are more active than other housecats. Many factors that could affect the outcome of the experiment need to be pinned down before you can run a meaningful experiment. For example, what’s the definition of a tortoiseshell cat and a non-tortoiseshell cat? Does the study include only domestic shorthairs, or pedigreed cats as well? What age groups will be included? And what’s the definition of “active”?
2. You did not sample the subjects of the statistical study randomly (fairly). Suppose in this same study you gather statistics on 200 tortoiseshell housecats and 50 non-tortoiseshell housecats. Now the outcome of the experiment will be biased because of the way you sampled the overall population of cats.
3. You let your assumptions about the outcome of a statistical study bias the experiment. Suppose you believe that tortoiseshell housecats are more active than other housecats. You measure tortoiseshell cat activity at 11 pm (when nocturnal animals like cats are active) and non-tortoiseshell cat activity after lunch (when mammals are inclined to nap). Now you’ve biased the outcome because of the time of day at which you conducted the sampling.
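The third pitfall can be demonstrated with a tiny simulation. Everything below is invented for illustration: both groups of simulated cats follow the identical activity pattern (lively at 11 pm, napping after lunch), yet the biased measurement schedule manufactures a large apparent difference that a fair schedule makes disappear.

```python
import random

random.seed(42)

def activity(hour):
    """Hypothetical minutes of activity per hour; identical for EVERY cat."""
    return 40 if hour == 23 else 10  # active at 11 pm, napping midday

def mean(xs):
    return sum(xs) / len(xs)

# Biased protocol: measure tortoiseshells at 11 pm, the others at 1 pm.
torties = [activity(23) + random.gauss(0, 2) for _ in range(200)]
others = [activity(13) + random.gauss(0, 2) for _ in range(50)]
print(f"biased means: {mean(torties):.1f} vs {mean(others):.1f}")  # ~40 vs ~10

# Fair protocol: measure both groups at the same two hours of the day.
fair_t = [activity(h) + random.gauss(0, 2) for h in (13, 23) for _ in range(100)]
fair_o = [activity(h) + random.gauss(0, 2) for h in (13, 23) for _ in range(25)]
print(f"fair means:   {mean(fair_t):.1f} vs {mean(fair_o):.1f}")  # nearly equal
```

The large gap under the biased protocol is an artifact of when the measurements were taken, not of any difference between the cats.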
What’s a non-statistician to do? Everyone should be skeptical of any statistical claim they read. You should ask yourself: Who is making this claim? What are they claiming? Why are they claiming it? And why now? Are they trying to get me to buy something or believe in something? How are they trying to convince me?
Advertising buzz, institutional glamour, and personal credentials do not guarantee the credibility of a statistic. All statistical claims should be treated as suspect until you confirm that they were developed using a rigorous methodology to reach verifiable statistical conclusions objectively. Such skepticism protects us from being taken in by untruths masquerading as “statistics”.