On Evil — Post 3: Black Swans and Hindsight Bias — When Rarity Fades

“Any assessment we make is subject to alteration—just as we are ourselves.” —Marcus Aurelius, Meditations 5:10



Events That Stop Time

How do we make sense of rare, extreme events that we’ve interpreted as evil? I mean the events that stop time. The ones that make the news anchors go quiet for a moment as they struggle to find words. The ones we reach for “evil” to describe because our sense of structure—our understanding of reality—has been shattered. You know the ones.

That question leads me to Nassim Taleb.


Taleb’s Black Swan

In his own words: “I stop and summarize the triplet: rarity, extreme impact, and retrospective (though not prospective) predictability.” He calls them Black Swan events—named after a story worth knowing.

For centuries, European naturalists assumed all swans were white. Every swan anyone had ever seen was white. It wasn’t a guess—it was treated as a law of nature. Then in 1697, Dutch explorers reached Western Australia and found black swans.

That’s Taleb’s point: the rarity wasn’t a property of the swan. It was a property of where people had been looking. Somewhere else on the map, black swans weren’t rare at all.

Three characteristics. Let’s sit with them:

  1. Rarity — They lie outside the realm of regular expectations. Nothing in past data convincingly points to their possibility.
  2. Extreme impact — When they occur, they cause disproportionate and significant consequences—many times larger than typical events.
  3. Retrospective predictability — After the event has happened, people often believe it was obvious or inevitable in hindsight. That’s the cognitive bias called hindsight bias. And it’s the one that matters most for what I’m trying to say here.

When Rarity Fades

As we retroactively deduce cause and effect, we convince ourselves we understand precisely what the causes were—feeding hindsight bias while subtracting from the rarity. The event stops looking rare. It starts looking like the only possible outcome. And in that shift, something gets lost.

The event loses its weight.

Its strangeness. Its claim on our moral attention.

The more we know and understand what happened, the grayer the swan looks. Let’s pull back the tragic curtain and reveal the cause.


We Narrate, Not Predict

In Fooled by Randomness, Taleb says something that reminds me of Hume: we’re wired to narrate, not predict. We do this constantly—with things far smaller than terrorism or tragedy:

Pick up any bestselling book. The back or inside cover is loaded with glowing endorsements. Five stars. Life-changing. Masterpiece. What you don’t see are the negative reviews and the hundreds of books that got the same treatment from the same publishers and simply disappeared. The cover hints at social proof, but it’s more of a highlight reel. That’s survivorship bias—we only see the swans that are in front of us.

Last year, I made a few grand in a single week investing in quantum computing stocks. I was feeling like a lower-middle-class Bobby Axelrod—but here’s the honest version: my timing was good. The CEO of Nvidia said something that sent the word “quantum” trending, and stocks with that word in their name tripled or quadrupled in value. I didn’t know something; I was positioned to catch a break. I got lucky. I could tell you a story about P&L sheets and companies with enough cash to cover their debts that would make it sound like skill. But it wouldn’t be true.

And then there are the traders and firms who are doing exceptionally well—right up until they aren’t. In a bull market, it’s hard to tell skill from tailwind. The ones still afloat convince themselves they’ve figured something out. The ones who sank aren’t around to offer a counterpoint.

They’re too busy swimming back to shore.


We Make the Same Move With People

Once we know the outcome, the comfort of inevitability is easier to live with than the discomfort of genuine uncertainty. This isn’t just relevant to securities trading and book jackets.

When something terrible happens, in the moment it feels like an aberration—a rupture in the natural order from seemingly nowhere. We reach for “evil” because it matches the bewildered feeling. It’s the word we use when we can’t trace the chain.

But then time passes. We learn more. We find out about the childhood, the trauma, the ideology, the circumstance. And the event starts to look less like a rupture and more like an outcome. A predictable one, given everything that preceded it.

Does that make it less terrible? No. The harm is still real. The victims are still real.

But it does make “evil” less useful as a description. Because what we’re actually describing—when we slow down and look—is a chain of events, decisions, and conditions that produced a result we find intolerable. That’s not evil. That’s causality.

Calling something evil is the same move as constructing the hindsight narrative. It gives the inexplicable a name. It makes the uncertainty manageable. And once you’ve named it, you don’t have to keep asking.

But examining it is exactly what we should be doing.

Before we get to the hard examples—and we will get there—I want to slow down and look at the word itself. Because if we’re going to spend this much time interrogating “evil,” we should probably ask: what does the word actually mean? Where did it come from? And why does it feel so much heavier than it has any right to?

That’s next.

Plain truth. Rough edges.


Closing Question

When you reach for “evil,” what are you trying to make permanent?


