"The greatness of science lies not in any particular theory but in the scientific method: in the willingness to question any theory, including the most cherished, and to change course when evidence demands it." (Nexus, Chapter 4)
Every information network makes errors. This is not a bug that can be fixed with better technology or smarter people; it's an inherent feature of any system that processes information about a complex world.
Harari's crucial insight: the most dangerous error is believing that your information system is error-free. Throughout history, the claim to infallibility has been the hallmark of destructive ideologies.
- Input Errors: Wrong data enters the system (measurement mistakes, lies, biased samples)
- Processing Errors: Correct data is analyzed incorrectly (flawed algorithms, human cognitive biases)
- Output Errors: Results are misinterpreted or misapplied (even correct conclusions can lead to wrong actions)
- Feedback Errors: Systems fail to learn from mistakes (suppression of criticism, confirmation bias); the sketch below shows where each class can enter a toy pipeline
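To make the taxonomy concrete, here is a minimal Python sketch of a toy reporting pipeline with one deliberate flaw at each stage. It is my illustration of the four error classes, not anything from the book; every function, name, and number is hypothetical.

```python
# Illustrative only: a toy reporting pipeline showing where each of the
# four error classes can enter. Every name and number is hypothetical.

def measure(true_value, sensor_bias=0.0):
    # Input error: the data entering the system is already wrong.
    return true_value + sensor_bias

def analyze(reading):
    # Processing error: a flawed rule; the threshold was meant to be 120,
    # so even a correct reading can be misclassified as an emergency.
    return "emergency" if reading > 100 else "normal"

def act(conclusion):
    # Output error: a correct conclusion mapped to a disproportionate action.
    return {"emergency": "halt all operations", "normal": "continue"}[conclusion]

def review(outcome_was_bad, log):
    # Feedback error: delete the append below (punish the bad news) and
    # the system never records failures, so it repeats them.
    if outcome_was_bad:
        log.append("mistake recorded: adjust the sensor and the threshold")

log = []
reading = measure(true_value=95, sensor_bias=10)  # input error: reads 105
decision = analyze(reading)                       # processing error: "emergency"
action = act(decision)                            # output error: overreaction
review(outcome_was_bad=True, log=log)             # feedback: the only fix
print(reading, decision, action, log)
```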
Why do people keep falling for claims of infallibility? Because uncertainty is uncomfortable. We want to believe that someone knows the truth, whether it's religious authorities, scientific experts, political leaders, or algorithms.
This desire for certainty is exploited by ideologues who claim their doctrine is beyond question. Once an information system is considered infallible, it becomes impossible to correct its errors, which then compound and multiply.
In 1870, the Catholic Church declared that the Pope is infallible when speaking on matters of faith and morals. This was not an ancient doctrine but a 19th-century innovation, created precisely when papal authority was being challenged by science and liberalism.
The declaration didn't make the Pope actually infallible; it just made it impossible to officially acknowledge papal errors.
Harari examines how totalitarian regimes (Nazi Germany, Stalinist Russia, Maoist China) all claimed access to infallible truth. The Party was always right. The Leader was never wrong. Anyone who pointed out errors was not a helpful critic but a traitor.
The result? Catastrophic policy failures. Millions died in the Great Leap Forward because no one could tell Mao that his agricultural policies were failing. The Nazi war effort collapsed partly because no one could tell Hitler his military strategy was flawed.
- No Error Correction: Without mechanisms to identify and fix mistakes, errors accumulate
- Information Suppression: Bad news is punished, so it stops flowing upward
- Reality Disconnect: Leaders make decisions based on what they want to hear, not what's actually happening
- Cascading Failures: Small errors compound into catastrophic ones, as the sketch below illustrates
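As a back-of-the-envelope illustration of compounding (my own, not Harari's), the sketch below compares an error that grows unchecked with one that is periodically reset by a feedback step. The 5% per-step error rate and the step counts are arbitrary.

```python
# Illustrative simulation: uncorrected errors compound multiplicatively,
# while periodic correction keeps the deviation bounded. Numbers are arbitrary.

def drift(steps, error_per_step=0.05, correct_every=None):
    deviation, worst = 1.0, 1.0
    for step in range(1, steps + 1):
        deviation *= 1 + error_per_step           # each step adds ~5% error
        worst = max(worst, deviation)
        if correct_every and step % correct_every == 0:
            deviation = 1.0                        # feedback resets the error
    return worst

print(drift(100))                    # no correction: error grows ~131x
print(drift(100, correct_every=10))  # with correction: peaks near 1.63x
```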
What makes a healthy information network? Not the absence of errors (that's impossible) but the presence of self-correcting mechanisms. A good network can identify its mistakes and fix them.
This is the genius of science: it doesn't claim to have the truth, but rather a method for getting closer to truth through systematic error correction. Every scientific claim is provisional, subject to revision if new evidence emerges.
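One standard way to formalize "provisional, subject to revision" is Bayes' rule, which prescribes exactly how much a belief should shift when evidence arrives. This is textbook probability offered as an illustration, not a formula from Nexus; the hypothesis and likelihoods below are invented.

```python
# Textbook Bayes' rule: a provisional belief (prior) is revised by evidence.
# The starting belief and the likelihoods are made up for illustration.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    evidence = numerator + p_evidence_if_false * (1 - prior)
    return numerator / evidence

belief = 0.9  # start out quite confident the theory is right
for _ in range(3):  # three independent results that fit the theory poorly
    belief = update(belief, p_evidence_if_true=0.2, p_evidence_if_false=0.8)
print(round(belief, 3))  # confidence falls to ~0.123; no claim is final
```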
Harari applies the same logic to politics. Democracy is not valuable because "the people" are wise; they often aren't. Democracy is valuable because it has built-in error-correction mechanisms: regular elections that can remove failing leaders, a free press that can expose mistakes, independent courts, and a protected opposition that is allowed to say the government is wrong.
These mechanisms don't guarantee good decisions; democracies make plenty of mistakes. But they make catastrophic, compounding errors less likely.
Democratic error-correction works only if people are willing to accept correction. When tribes become more important than truth, when "my side" must win at all costs, self-correction breaks down. People reject evidence that contradicts their team's position.
This is the danger of extreme polarization: it disables the error-correcting mechanisms that make democracy work.
Harari warns that AI presents new infallibility risks. Algorithms are often presented as neutral, objective, and mathematical, beyond human bias. But AI systems can be just as wrong as human systems, while being much harder to question.
Many AI systems are "black boxes": even their creators can't fully explain why they produce particular outputs. When an algorithm denies you a loan, flags you as a security risk, or recommends a medical treatment, you often can't know why.
If we can't understand how AI makes decisions, how can we identify and correct its errors?
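The contrast can be sketched in code. A hypothetical scoring model that returns per-feature reason codes can be questioned and corrected; one that returns only a verdict cannot. Every feature, weight, and threshold below is invented for illustration and stands in for models that are vastly more complex.

```python
# Hypothetical loan-scoring sketch: the same decision, but the transparent
# version returns reason codes that can be inspected and contested.
# All features, weights, and the threshold are invented for illustration.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def score_opaque(applicant):
    total = sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return total >= THRESHOLD            # a bare yes/no: nothing to question

def score_transparent(applicant):
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    decision = sum(contributions.values()) >= THRESHOLD
    return decision, contributions       # the reasons travel with the decision

applicant = {"income": 3.0, "debt": 2.0, "years_employed": 1.0}
print(score_opaque(applicant))       # False, but why?
print(score_transparent(applicant))  # False, with per-feature reasons
```

Real deep-learning models do not decompose this cleanly, which is exactly the problem; interpretability research is, in effect, an attempt to restore this kind of error-correction surface.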
The practical implication is that healthy information networks require institutional humility, a built-in recognition that the system might be wrong. This means: