Errors: The Fantasy of Infallibility

Part I: Human Networks | Why All Systems Fail

“The greatness of science lies not in any particular theory but in the scientific method—in the willingness to question any theory, including the most cherished, and to change course when evidence demands it.” — Nexus, Chapter 4

The Universal Flaw

Every information network makes errors. This is not a bug that can be fixed with better technology or smarter people—it’s an inherent feature of any system that processes information about a complex world.

Harari’s crucial insight: the most dangerous error is believing that your information system is error-free. Throughout history, the claim to infallibility has been the hallmark of destructive ideologies.

Types of Errors

Input Errors: Wrong data enters the system (measurement mistakes, lies, biased samples)

Processing Errors: Correct data is analyzed incorrectly (flawed algorithms, human cognitive biases)

Output Errors: Results are misinterpreted or misapplied (even correct conclusions can lead to wrong actions)

Feedback Errors: Systems fail to learn from mistakes (suppression of criticism, confirmation bias)
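
The four error types above can be made concrete with a toy analogy. This sketch is not from the book; all function names and numbers are invented. It models an information pipeline where an input error (a biased sensor) flows through processing and output stages, and where a feedback stage can gradually cancel the bias, while a network that suppresses feedback keeps acting on the distorted estimate.

```python
# Purely illustrative toy model of Harari's four error types as pipeline stages.
# All names and values here are hypothetical, not from the book.

def sense(true_value, bias=0.0):
    """Input stage: the measurement may carry a bias (input error)."""
    return true_value + bias

def analyze(reading, scale=1.0):
    """Processing stage: a flawed model can distort correct data (processing error)."""
    return reading * scale

def act(estimate, threshold=10.0):
    """Output stage: even a plausible estimate can trigger the wrong action (output error)."""
    return "intervene" if estimate > threshold else "wait"

def run_with_feedback(true_value, bias, scale=1.0, rounds=5):
    """Feedback stage: compare estimates to reality and shrink the bias.
    A network that punishes this comparison never fixes its input error."""
    for _ in range(rounds):
        estimate = analyze(sense(true_value, bias), scale)
        bias -= 0.5 * (estimate - true_value)  # learn from the observed gap
    return analyze(sense(true_value, bias), scale)
```

Without the feedback stage, the biased reading drives the wrong action; with it, the bias shrinks by half each round and the estimate converges toward reality.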

The Infallibility Trap

Why do people keep falling for claims of infallibility? Because uncertainty is uncomfortable. We want to believe that someone knows the truth—whether it’s religious authorities, scientific experts, political leaders, or algorithms.

This desire for certainty is exploited by ideologues who claim their doctrine is beyond question. Once an information system is considered infallible, it becomes impossible to correct its errors—which then compound and multiply.

The Infallible Pope

In 1870, the Catholic Church declared that the Pope is infallible when speaking on matters of faith and morals. This was not an ancient doctrine but a 19th-century innovation, created precisely when papal authority was being challenged by science and liberalism.

The declaration didn’t make the Pope actually infallible—it just made it impossible to officially acknowledge papal errors.

Totalitarianism: Error as System

Harari examines how totalitarian regimes—Nazi Germany, Stalinist Russia, Maoist China—all claimed access to infallible truth. The Party was always right. The Leader was never wrong. Anyone who pointed out errors was not a helpful critic but a traitor.

The result? Catastrophic policy failures. Millions died in the Great Leap Forward because no one could tell Mao that his agricultural policies were failing. The Nazi war effort collapsed partly because no one could tell Hitler his military strategy was flawed.

Why Totalitarianism Fails

No Error Correction: Without mechanisms to identify and fix mistakes, errors accumulate

Information Suppression: Bad news is punished, so it stops flowing upward

Reality Disconnect: Leaders make decisions based on what they want to hear, not what’s actually happening

Cascading Failures: Small errors compound into catastrophic ones
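
The compounding dynamic in the list above can be sketched numerically. This is a hypothetical toy model, not anything from the book: each decision step multiplies the accumulated error by a growth factor, and a correction mechanism (free criticism, upward bad news) removes a fraction of each step's error before the next step.

```python
# Hypothetical toy model: error compounds multiplicatively across decisions.
# `correction` is the fraction of each step's error that feedback removes;
# 0.0 models a regime where criticism is suppressed. Values are invented.

def total_error(steps, growth=1.5, correction=0.0):
    """Accumulate error over `steps` decisions with optional per-step correction."""
    error = 1.0
    for _ in range(steps):
        error *= growth * (1.0 - correction)
    return error
```

With no correction, error explodes exponentially (1.5^10 is roughly 58x); with even half of each step's error corrected, the same process shrinks the error instead, which is the whole argument for self-correcting mechanisms in one inequality.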

The Self-Correcting Network

What makes a healthy information network? Not the absence of errors—that’s impossible—but the presence of self-correcting mechanisms. A good network can identify its mistakes and fix them.

This is the genius of science: it doesn’t claim to have the truth, but rather a method for getting closer to truth through systematic error correction. Every scientific claim is provisional, subject to revision if new evidence emerges.

Democracy as Error Correction

Harari applies the same logic to politics. Democracy is not valuable because “the people” are wise—they often aren’t. Democracy is valuable because it has built-in error-correction mechanisms:

Elections: Leaders who fail can be replaced without violence

Free Press: Journalists can expose errors that officials would rather hide

Independent Courts: Judges can check and reverse government mistakes

Free Speech: Critics can point out failures without being treated as traitors

These mechanisms don’t guarantee good decisions—democracies make plenty of mistakes. But they make catastrophic, compounding errors less likely.

When Self-Correction Fails

Democratic error-correction works only if people are willing to accept correction. When tribes become more important than truth, when “my side” must win at all costs, self-correction breaks down. People reject evidence that contradicts their team’s position.

This is the danger of extreme polarization: it disables the error-correcting mechanisms that make democracy work.

AI and the New Infallibility

Harari warns that AI presents new infallibility risks. Algorithms are often presented as neutral, objective, and mathematical—beyond human bias. But AI systems can be just as wrong as human systems, while being much harder to question.

The Black Box Problem

Many AI systems are “black boxes”—even their creators can’t fully explain why they produce particular outputs. When an algorithm denies you a loan, flags you as a security risk, or recommends a medical treatment, you often can’t know why.

If we can’t understand how AI makes decisions, how can we identify and correct its errors?

Humility as Strategy

The practical implication is that healthy information networks require institutional humility—built-in recognition that the system might be wrong. This means:

Inviting Criticism: Rewarding those who find flaws rather than punishing them

Transparency: Opening decisions and their reasoning to outside scrutiny

Revisability: Treating every policy, doctrine, and algorithm as provisional

Key Takeaways

All information networks make errors; the fatal mistake is claiming they cannot.

Claims of infallibility block error correction, so errors compound and multiply.

Science and democracy work not because they are always right but because they are correctable.

Extreme polarization and opaque AI systems both threaten to disable the self-correcting mechanisms we depend on.