This week’s rebroadcast takes us back to 2007, when I was searching for a good term to describe conclusions drawn with insufficient evidence. In it, I discuss the book Black Swan by Nassim Nicholas Taleb, who has become a more polarizing figure in the years since its publication.
Previously, while answering the Grey’s Anatomy question which generated so much talkback, I found myself searching for a specific term I knew had to exist: the human tendency to consider only the samples presented, ignoring other relevant items.
It felt like a fallacy, but it didn’t quite match up to any of the contenders I found online. If you squint really hard, you can make it look like a special case of the Fallacy of (Hasty) Generalization, but that seems a stretch for something which feels fairly commonplace. I ended up coining “Fallacy of Limited Sampling,” with a mental sticky note to replace it once I found a better term.
To my surprise, I found one over the middle of the Atlantic, during the 20+ hour flight to Africa: “silent evidence.”
That’s the term Nassim Nicholas Taleb uses to describe this phenomenon in The Black Swan. He introduces it with a story from Cicero:
Diagoras, a nonbeliever in the gods, was shown painted tablets bearing the portraits of some worshippers who prayed, then survived a subsequent shipwreck. The implication was that praying protects you from drowning.
Diagoras asked, “Where are the pictures of those who prayed, then drowned?”
Those “drowned believers” are silent evidence. You don’t take them into account because they can’t speak up for themselves. The cliché is that “history is written by the winners.” In fact, it’s written by whoever happens to survive.
Following a discussion of the Phoenicians, and how their lack of literature is more likely due to the fragility of their paper rather than a failure of their culture, Taleb urges us to cast our nets widely:
Consider the thousands of writers now completely vanished from consciousness: their record did not enter analyses. We do not see the tons of rejected manuscripts because these have never been published, or the profile of actors who never won an audition — therefore cannot analyze their attributes. To understand successes, the study of traits in failure need to be present. For instance, some traits that seem to explain millionaires, like appetite for risk, only appear because one does not study bankruptcies. If one includes bankrupt people in the sample, then risk-taking would not appear to be a valid factor explaining success.
Taleb calls this overlooked bulk of information “silent evidence.” I assumed that was a term of art, but Googling it now, most of the references point back to Taleb’s book. It’s possible that he is its primary champion. Regardless, I like it, and intend to use it liberally.[1]
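Taleb’s millionaire example can be made concrete with a tiny simulation. This is my own made-up model, not anything from the book: assume each person has a random “risk appetite,” and the risk-seeking are more likely to take a big gamble that makes them either rich or bankrupt, while the cautious stay comfortably average. If you study only the visible survivors, risk appetite looks like the secret of success; restore the silent evidence (the bankrupts) and it merely predicts extreme outcomes in both directions.

```python
import random

random.seed(0)

def simulate(n=100_000):
    # Hypothetical model: each person has a risk appetite in [0, 1].
    # With probability equal to that appetite, they take a big gamble,
    # which makes them a millionaire or a bankrupt with equal odds.
    # Everyone else ends up middling.
    millionaires, middling, bankrupts = [], [], []
    for _ in range(n):
        risk = random.random()
        if random.random() < risk:  # gamble; more likely for the risk-seeking
            (millionaires if random.random() < 0.5 else bankrupts).append(risk)
        else:
            middling.append(risk)
    return millionaires, middling, bankrupts

def mean(xs):
    return sum(xs) / len(xs)

millionaires, middling, bankrupts = simulate()

# Studying only the visible survivors: risk looks like a success factor.
print(f"millionaires' mean risk: {mean(millionaires):.2f}")  # ~0.67
print(f"middling mean risk:      {mean(middling):.2f}")      # ~0.33

# Restoring the silent evidence: the bankrupts were just as risk-hungry,
# so risk appetite doesn't actually distinguish success from ruin.
print(f"bankrupts' mean risk:    {mean(bankrupts):.2f}")     # ~0.67
```

The millionaires and the bankrupts have essentially identical risk profiles; the only reason risk-taking “explains” wealth is that half of the risk-takers have been filtered out of the sample.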
I didn’t mean for this to become a book review, but since I started…
There are many things I liked about The Black Swan. In addition to silent evidence, I found myself nodding my head to his discussion of the confirmation bias (we tend to notice things that fit our theories), Platonicity (confusion of the model with what it’s modeling), and the narrative fallacy — our need to create a story which explains events after they happened, even if the causality is questionable (or impossible). Thus we write history books explaining how World War I started, when, had you been reading the newspapers of the time, these “causes” wouldn’t have shown up.
Taleb’s central thesis is that there are unexpected incidents (Black Swans) which have enormous, disproportionate impact on our world: terrorist attacks, bank failures, iPods. By definition, we can’t predict them — which means any prediction about the future at all is extremely dubious. The best we can do is constantly remind ourselves of the limits of our knowledge, and make some contingency for the completely unexpected.[2]
I’ve always been leery of statements like, “By 2075, the U.S. population will total 1 billion.” Taleb’s book helps justify my frustration at these seemingly scientific projections, which discount what we inherently know about the future: that we know much less than we think.
Are you enjoying this newsletter?
📧 Forward it to a friend and suggest they check it out.
🔗 Share a link to this post on social media.
🗣 Have ideas for future topics (or just want to say hello)? Reach out to Chris via email at inneresting@johnaugust.com, Mastodon @ccsont@mastodon.art, or Threads @ccsont@threads.net
[1] 2024 update: I do in fact use “silent evidence” a lot. But that’s just me. It hasn’t become a popular term in the last 15 years, as evidenced by the fact that this blog post shows up on the first page of Google search results.
[2] Donald Rumsfeld took a lot of flak for his Yogi Berra-like koan about “Unknown Unknowns” at a Defense Department briefing in 2002, which Slate put in verse form. I’m scared to say: he’s actually kind of right. Acknowledging that there are “unknown unknowns” is important.