Scattered Thoughts: “That’s Anecdotal!”
…And Other Causal Abuses in Casual Speech
A little logic is a dangerous thing.
“That’s anecdotal” is one of the most overused contemporary buzz phrases. It seems to pop up a lot in popular discussions of science and economics, especially as they pertain to the current worldwide struggle between liberty and “safety.”
An anecdote is a short episode, event, or personal experience that seems to demonstrate a causal relationship.
In other words, it’s a kind of example.
Like any other example, an anecdote can be useful or not, depending on whether it really indicates a more general state of affairs.
By themselves, anecdotes are often insufficient for making a sound generalization.
Fair enough, but they are also data points. Just because a single example isn’t the whole case for X, that doesn’t mean it isn’t relevant to X.
Here are several problems with dismissing an illustrative example by saying, “That’s anecdotal!”
First problem: Anecdotes are Relevant Data=> Any sound generalization is an abstraction from a sufficient number of anecdotes. All of us, in real life, have to make judgments based on our experiences. We regard a person as prudent or wise when he is able to judge successfully, in real time. Since navigating life depends on it, it must be possible for a reasonable person to draw reasonable conclusions from a limited number of particular experiences (anecdotes).
Therefore, it’s irrational to immediately dismiss a data point, even if it’s a singleton, by saying, “That’s anecdotal.” The objection assumes that a person is being irrational merely for drawing upon his own experience. But it’s only irrational to draw unreasonably broad conclusions from an insufficient number of experiences, or from an insufficiently pertinent example.
The buzz phrase “That’s anecdotal!” is often used as if it meant the same thing as “That’s a fallacy.”
The actual fallacy in question would be post hoc, ergo propter hoc. “After this, therefore because of this.”
Example: “I ate a jelly donut, and afterwards became violently ill. Therefore, jelly donuts necessarily make me ill.”
A person who says, “That’s anecdotal,” to every example that runs contrary to his preferred narrative means to say, “Correlation does not equal causation.”
Actually, what he should be meaning to say is, “Correlation does not, by itself, equal causation,” because, obviously, it sometimes does indicate causation.
Back to the jelly donut: It might have made me ill, or I might have already been ill before eating it. Perhaps just that particular jelly donut made me ill, or perhaps only jelly donuts with that specific filling, or from that particular Dunkin’ Donuts, or from all Dunkin’ Donuts, or whatever.
On the other hand, if I got sick after eating a jelly donut, it might be because jelly donuts actually do make me violently ill.
It wouldn’t be reasonable to conclude with dogmatic certainty that jelly donuts make me ill based upon one case. However, given how sick I got (in this hypothetical): 1) It would be reasonable to be a little more cautious before my next jelly donut, and 2) It would be extremely reasonable to conclude that jelly donuts make me violently ill if, after eating several more jelly donuts from several different donut stores, I became violently ill every single time.
There is a point where you have eaten enough jelly donuts from enough different places that you can safely and reasonably conclude that you should avoid jelly donuts. You don’t need a clinical study with statistical analysis.
Also, there is a point well before this point of certainty where you can begin to exercise some caution toward jelly donuts, even based on a single experience, provided that experience was sufficiently untainted by other causes (e.g., you were not at all sick before you ate it, and you’ve had plenty of other donuts from that shop that never made you ill).
Second problem: One Case Can Be Enough=> There are situations where even a single anecdote makes an almost complete case for a cause-and-effect relationship. For example, if you are healthy, and have never been stung by a bee, and, upon being stung by a bee, you immediately go into anaphylaxis, then you should provisionally conclude that you’re allergic to bees. Similarly, if you are healthy, and are bitten by a certain spider, and immediately experience terrible pain, cramping, and other physical symptoms, then you should provisionally conclude that the spider bite caused the symptoms. These are “anecdotes,” but they are more than sufficient to establish a causal relationship between event A and event B. With absolute certainty? No. But convincingly, such that the conclusion can’t be gainsaid without overwhelming evidence to the contrary? Sure.
In general, any time that there is a sudden, drastic change in circumstances with no visible leadup apart from a single, atypical event, a person ought to conclude (at least provisionally) that the atypical event brought about the change of circumstances.
Bottom Line: Offering an anecdote as evidence is not the same thing as committing the post hoc, ergo propter hoc fallacy. It’s true that anecdotes, wrongly interpreted, provide instances of that fallacy. But then again, anecdotes, rightly interpreted, provide instances of actual cause-and-effect.
Consequently, dismissing someone’s experience – even repeated instances of the same experience in varied circumstances – by saying, “That’s anecdotal!” is not reasonable or just. The person may turn out to be wrong, but he’s not wrong simply for appealing to experience as evidence.
A related “Causal Abuse” from casual conversation (and in popular media/news) is the “That hasn’t been scientifically proven” posture. According to this attitude, a thing cannot be known as true unless there has been detailed scientific or statistical analysis of the same.
In other words, it’s saying that nobody can know anything to be true until that thing’s nature has: 1) been rigorously and scientifically tested and confirmed by experts, and/or 2) at least matched a known pattern of cause and effect, so that the experience fits into an existing paradigm.
Let’s break these down:
1) It hasn’t been rigorously tested and confirmed by experts.
This one really burns me, for it amounts to saying that the human race as a whole can never know anything, nor learn anything from its collective experience, without expert approval; and, likewise, that you, as a thinking, acting individual, can never draw any conclusions at all based on your own experiences.
This attitude manifests itself in a continual gainsaying of all tradition and common sense, in the gaslighting of personal experiences, and in the “mythbusting” attitude, which takes delight in showing people that everything they think they know is really baloney. It manifests as an enthusiasm for dismissing all received wisdom, and a cavalier attitude toward norms and general rules that have remained stable over time and through different cultures. People with this perspective confuse the fallacy of exclusive reliance on tradition (i.e., X is traditional, therefore X is true) with an opposing fallacy (i.e., traditions and norms are necessarily the product of prejudice and irrationality).
A mundane example of this would be what you might call the “Anti-Chicken Soup” posture.
Suppose that mothers have found, over time and by pooled maternal experience, that eating some food – say chicken soup – promotes recovery from various ailments.
Now the Prove It People (PIPs) assume from the outset that the mothers don’t know what they’re talking about. (I mean, how could they? They’re only onsite caretakers who constantly communicate with each other, and with their own mothers, about what works and doesn’t work.) The PIPs set out to design an experiment to prove the mothers wrong.
Scenario 1: The experiment doesn’t find anything to confirm what the mothers were saying
Given Scenario 1, the PIPs will announce that the mothers were wrong. But this isn’t a certain conclusion. After all, not finding hard evidence that chicken soup works is not the same thing as finding hard evidence that it doesn’t work. It’s hard to prove a negative. Also, is the experiment’s design really comparable to the real-life scenario? The experimenters likely didn’t give the testing group chicken soup itself, but rather a pill with the essential nutrients found in chicken soup. But maybe it’s not just nutrients that do the trick. Maybe it’s a combination of the nutrients and the particular psychological experience of well-being that goes along with eating chicken soup, all nestled up in your bed, which promotes recovery from illness. And even if they gave the testing group actual chicken soup, it might make a difference whether you eat it while tucked up in your bed, all warm and comfortable, or in a sterile laboratory surrounded by creepy, soulless PIPs.
Scenario 2: The experiment finds evidence that confirms what the mothers are saying
Every so often, you read an article that breathlessly announces something along the lines of, “Mothers Right About Chicken Soup After All!” Some study somewhere found that when you give sick people chicken soup (or whatever), they tend to get better. This is supposed to be newsworthy, as if the fact that scientists studied the matter now makes it true. I say it’s almost irrelevant. As mentioned above, only mothers are likely to actually be in the position to observe what actually happens when you give a kid with a cold or a stomach bug a nice, salty, steaming bowl of chicken broth. And part of the positive effect of chicken soup might come from the scientific incalculables already mentioned. If collective maternal experience, pooled and shared over time, confirms that this tends to lead to recovery, then it’s more rational to favor that experience than to favor the narrower, artificial designs of a laboratory experiment. Sometimes confirmation by the PIPs is irrelevant and unnecessary. (And imagine what would happen in Scenario 3: “Science Finds that Chicken Soup Makes Sick Kids Sicker!” That would run so contrary to human experience that even the scientists would throw out their own conclusions, suspecting an experimental flaw. In other words, they’d trust their own experience over the data.)
Case in point: Sneezing in the sun
A while back I observed a conversation between two men, one a dad, and the other a doctor. Somehow the subject of sneezing in the sunlight came up. The doctor said that, according to a certain study, sneezing in the sun is a learned behavior, something that has no physical cause. The dad wrinkled his brow and objected that he’d seen his own babies, only weeks out of the hospital, sneeze when first exposed to direct sunlight. There’s just no way, said the dad, that the babies learned that from their parents. The doctor immediately dismissed the dad’s mere paternal experience, pointing out that science had spoken, without addressing the dad’s counterevidence.
Now I don’t know if the doctor in question was accurately reporting the study in question. He may have remembered it backwards, for all I know. Either way, it makes my point. The father had directly observed his own babies sneezing when suddenly exposed to sunlight. If his memories are accurate, then it’s clearly absurd to attribute this sneezing to some kind of psychosomatic learned behavior. The babies are too young for that. If sneezing in the sun was an observed spontaneous response to a novel circumstance, then the father had good reason for believing what he’d seen with his own eyes, rather than what was claimed by a laboratory study. And it’s actually irrelevant whether the study in question actually confirmed or refuted the claim that the sun can cause us to sneeze. If the former, then who cares? Experience already showed as much. If the latter, then there must be something wrong with the study, if it indeed contradicts experience. Perhaps a future study will revise the findings of the former, at which point we’ll be told that the science – which is always right, true, and reliable – has changed. Either way, direct repeated experience should get the benefit of the doubt.
To make the point about the relevance and occasional preferability of direct experience, consider two thought experiments.
Thought Experiment 1: Lost in the Jungle
Suppose you wake up in the middle of a thick rainforest without any supplies. You have to get out within, let’s say, a week, in order to survive. That will mean living in the jungle for a week, traveling through its thick foliage without getting hopelessly lost, avoiding all hazards, knowing what to eat and where to sleep, and so forth. Now who would you rather have with you as your guide: a) a scientifically uneducated but thoroughly experienced native of said rainforest, one who intimately knows the whole forest, including the local flora and fauna and the surrounding terrain, or b) some rainforest “expert” from a university, armed with only a map? If you chose b, then please send me the names and addresses of your next of kin. I’ll make sure you get a proper burial (assuming your country isn’t currently locked down from Covid).
Thought Experiment 2: Daycare
Suppose you have to leave your tiny, three-month-old child in the care of another while you go away for a week. Who would you rather it be: a) an experienced mother of three, the kind who brings chicken soup to her sick kid, or b) the dude who designed the experiment that claims the sun doesn’t actually make your kid sneeze? Nuff said.
2) That doesn’t match an understood pattern of cause and effect, or fit into an existing paradigm!
It’s important to keep in mind that the human race survived for tens of thousands of years before modern science came onto the scene. During that time, trial and error, careful observation, and pooled human experience served the role currently assumed by “experts.” During the same time, the only paradigms on offer were either mythical, or far less precise than our own. People had to figure out what was healthy and unhealthy to eat and drink without, in many cases, any theoretical understanding of why this or that thing was unhealthy to eat, or with only a mythical explanation of the same. People learned to deal with the practical realities of weight, height, heat, etcetera, with little or no theoretical knowledge of gravity, mass, or physics. People – including aboriginal people – observed, named, and navigated by the stars before they had any idea what stars were. People had to build cities, ships, and farms with only practical knowledge of the same. People who knew nothing of the chemical reasons behind soil exhaustion still knew the soil became exhausted, and discovered fertilization, or invented the three-field system, or moved the crop, all without science as we understand it. People who did not have an accurate theoretical account of the elements figured out what materials worked for building large, complicated structures. People without modern computers, mega-machines, or Newtonian physics designed and built cities, temples, pyramids, and cathedrals.
Of course, the sciences have greatly increased our knowledge, and have made possible devices that would have been impossible for ancient and medieval builders, healers, and farmers. Modern science and technology have also increased our life expectancy, and rescued us from the effects of many common ailments that typically killed off large swaths of pre-scientific peoples.
But, speaking of that, what would you rather have: a) a lab-made vaccine against the bubonic plague, or b) actual, inherited immunity to the bubonic plague? Personally, I’d rather have both, but given only one choice, I’d easily choose the one that was tried and true, and born of real life (in other words, b).
The point of my observations above is not to gainsay the value of well-done science. The point is this: we do have access, through experience – that is, through collected, pooled, and culturally transmitted anecdotes – to tons of practical knowledge. That knowledge, in turn, is real knowledge even if we don’t understand the theory behind it!
And, by the way, all scientific knowledge depends on the efficacy of direct experience, and the general reliability of the senses. Each act of observation is an anecdote.
It is one thing to use theoretical and experimental science to refine and increase existing knowledge (including the bodily knowledge called “immunity”), and to add to it.
It’s another thing to dismiss and disparage the value of direct human experience of cause-and-effect, simply because an observed cause-and-effect relationship does not match any theoretically known or paradigmatic pattern. That is just irrational.
I don’t have to know what gravity is in order to know that falling off a cliff will kill me.
I don’t have to know what water is composed of in order to know that drinking a lot of water is generally good for me, and that drinking no water is going to kill me.
Meanwhile, there are lots of situations where common sense and common experience are better guides to sanity than the opinions of experts.
Case in point: Covid Craziness
If your common sense and common experience of human nature, human society, and viruses told you that: 1) only very specialized masks would do anything to slow transmission of an extremely contagious virus; 2) only people who were generally unhealthy were, in most cases, going to die from the virus; 3) locking down society would do little to slow the transmission of Covid, or to stop Covid deaths, but would absolutely devastate small businesses, mental health, and the happiness of young people; 4) locking people in their homes would lead to a rise in youth suicide, domestic abuse, and financial ruin for tens of thousands of people; 5) most normal people would be better off gaining natural immunity to the virus than getting a vaccine, however well designed; 6) even vaccinated people would still spread the virus; 7) forcing children to stay home and spend thousands of hours on a computer screen for a year or more would promote obesity and unhappiness, and stunt their learning; 8) making children mask, and not allowing them to play outside and interact with other people, would cause them great psychological harm and would compound current social trends in this direction; 9) businesses, such as tech giants and pharmaceutical companies, that benefit immensely from lockdowns would do everything in their power to extend Covid craziness and to silence dissenting voices, no matter how well-qualified, including cancelling and shaming highly qualified experts in virology and immunology; 10) a crisis that causes the interests of the modern state, the panic-button media establishment, the tech giants, and companies that deliver to your door to dovetail and align would make those same interests highly resistant to any counter-narrative; and 11) the immense powers assumed by the State under Covid would result in government tracking of individuals, new forms of control in the form of “showing your papers,” the ostracism and demonization of those who don’t agree, and an even more politically volatile and fragmented society than we already had under Bush, Obama, and Trump… then, congratulations! Your common sense, based on common experience of what viruses, human nature, and human societies are actually like, has been fully vindicated!
You were right. The “experts,” at least those who were permitted to voice their opinions without losing their careers, have been proven wrong, over and over again.
But, alas: anecdotally, there’s no reason to think that any of those politicians, bureaucrats, media companies, or tech giants will ever face the consequences of their actions.
Those who allowed only one set of facts or opinions to be heard, and who, consequently, turned Covid from a serious health crisis for some people in some demographics into a worldwide social, economic, and political disaster [whose primary victims have been children and young people (who were barely affected by the virus), small businesses, extended families, and the very human fabric of interpersonal relationships] will never face the consequences of their actions.
They might be sued, of course, but not much will come of it. If powerful people understand anything, it’s how to mutually protect their asses (and assets.)
People who were forced, for no good reason, to be absent when their mother or father died, or to forgo an important funeral, or to lose out on something they’d worked hard for over a period of years, or who, out of genuine concern, made themselves safety warriors who alienated and angered friends and family, will eventually wake up to the horrifying fact that, in some sense, this was all for nothing.
And, as has been the case over and over again in my lifetime – when our intelligence services lied about the source of the anthrax attacks after 9/11 (but you forgot about that one!), when Bush lied about Iraq’s weapons of mass destruction, when the NSA lied about illegally spying on American citizens (before they retroactively made it legal), when the CIA lied about torturing suspected terrorists, when Democrats and virtually all journalists lied about Trump being elected by Russian interference – the lies, lies, and damned lies will just keep coming.
When will we learn that the politically ambitious and the absurdly wealthy – anyone with the demonic desire to be free of all human limitations (which, as I’ll explore in a future essay, is closely related to the desire to rule the lives of others) – deal in lies? They lie regularly and easily. They have lied to you your whole life, and still, you forget – every few years – and you believe the newest lie.
But hey, maybe this constant lying doesn’t indicate a pattern of real, coordinated cause-and-effect.
After all, it’s anecdotal.
© 2022 Joseph Breslin All Rights Reserved