Totalitarianism: Power to the Algorithm

Part III: Computer Politics | AI-Enabled Authoritarianism

“In the past, the KGB couldn’t follow everyone. They had to choose their targets. AI doesn’t have to choose. It can follow everyone, all the time, automatically. The bottleneck of totalitarianism was human attention. AI removes that bottleneck.” — Nexus, Chapter 10

Beyond Orwell

George Orwell imagined a totalitarian future of telescreens, thought police, and constant surveillance. He couldn’t imagine AI. The surveillance state that AI enables goes far beyond Orwell’s nightmares—not because it’s more brutal, but because it’s more thorough, more automated, and more inescapable.

Harari argues that AI may solve the fundamental problem that limited previous totalitarian regimes: the inability to process enough information to truly control a society.

The Totalitarian Information Problem

Stalin’s Dilemma: To control everything, you need to know everything. But gathering and processing that much information exceeded human capacity.

AI’s Answer: Machine learning can process billions of data points, identify patterns, and flag anomalies—automatically, continuously, at scale.

What was impossible for human bureaucracies is trivial for AI systems.
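The scale claim above can be made concrete with a toy sketch. This is purely illustrative, not a description of any real surveillance system: the data, the z-score method, and the threshold are all invented for the example. The point is only that flagging statistical outliers costs a few arithmetic operations per record, so running it over billions of records is an engineering problem rather than a conceptual one.

```python
import statistics

def flag_anomalies(values, threshold=2.0):
    """Return indices of values more than `threshold` standard
    deviations from the mean.

    A toy stand-in for the pattern-spotting described in the text:
    the work per record is constant, so the same loop scales to
    arbitrarily large data streams.
    """
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [i for i, v in enumerate(values)
            if abs(v - mean) > threshold * stdev]

# Mostly routine activity levels, with one outlier at index 5.
activity = [10, 12, 11, 9, 10, 95, 11, 10, 12, 10]
print(flag_anomalies(activity))  # [5]
```

No human analyst is needed anywhere in the loop, which is exactly the shift Harari describes: the bottleneck of human attention disappears.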

The Surveillance Architecture

Modern surveillance doesn’t require informants or secret police listening at keyholes. It’s built into the infrastructure of daily life: phones, cameras, payment systems, and social media continuously record where people go, what they buy, and what they say.

The data already exists. AI makes it usable for control.

China’s Social Credit System

China is implementing a “social credit” system that tracks citizen behavior and assigns scores affecting access to jobs, travel, loans, and social services. The system aggregates data from surveillance cameras, financial records, social media, and government databases.

This isn’t science fiction—it’s operational, and it’s being refined with AI to become more comprehensive and automated.

Predictive Control

The most chilling application of AI authoritarianism isn’t punishing dissent—it’s predicting and preventing it before it happens. AI systems can analyze patterns to identify potential troublemakers, nascent movements, or brewing discontent.

This flips the traditional model: instead of reacting to opposition, the state can preemptively neutralize it.

From Reactive to Predictive Control

Traditional: Dissent occurs → State detects → State responds

AI-Enabled: AI predicts dissent risk → State intervenes → Dissent never occurs

If the state can identify and “treat” potential dissidents before they act, organized opposition becomes nearly impossible.
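The reactive-versus-predictive flip can be caricatured in a few lines. Everything here is invented for illustration: the risk score, the threshold, and the record fields stand in for whatever a real system would use. The structural difference is the input each model acts on.

```python
def reactive_state(event_log):
    """Traditional model: respond only to dissent that has occurred."""
    return [e for e in event_log if e["type"] == "dissent"]

def predictive_state(citizens, risk_threshold=0.7):
    """AI-enabled model: intervene on a predicted risk score,
    before any act of dissent exists.

    `risk` is a made-up number here; in the scenario Harari
    describes it would be derived from aggregated behavioral data.
    """
    return [c["name"] for c in citizens if c["risk"] > risk_threshold]

citizens = [
    {"name": "A", "risk": 0.2},
    {"name": "B", "risk": 0.9},  # flagged despite having done nothing
]
print(predictive_state(citizens))  # ['B']
```

The reactive function needs an event to exist before it can fire; the predictive one fires on a score alone, which is why the text calls preemptive neutralization a flip of the traditional model.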

The Automation of Repression

Human agents of repression—secret police, informants, censors—have limits. They require salaries, they can be corrupted, they might have moral qualms. AI has none of these limitations.

Automated systems can monitor every channel simultaneously, flag deviations the moment they appear, and act around the clock without pay, fatigue, or conscience.

The Xinjiang Model

Harari examines China’s treatment of the Uyghur population in Xinjiang as a case study of AI-enabled authoritarianism. The region has become a laboratory for surveillance technology.

This represents a new form of totalitarian control—more targeted, more data-driven, more automated than anything before.

Export Model

The surveillance technologies developed in Xinjiang are being exported. Chinese companies sell facial recognition, smart city infrastructure, and monitoring systems to governments around the world—from democracies to dictatorships.

The tools of AI authoritarianism are becoming globally available.

Can Totalitarianism Be Efficient?

Harari asks a disturbing question: Could AI-enabled totalitarianism actually work? Previous totalitarian states failed partly because centralized control couldn’t process enough information. If AI solves the information problem, might such regimes become stable?

He remains skeptical. Even with perfect surveillance, totalitarian systems still face the self-correction problem from Chapter 4. If no one can tell the leader they’re wrong, errors accumulate. AI might improve surveillance without improving decision-making.

The Digital Panopticon

Philosopher Jeremy Bentham imagined a “panopticon”—a prison where guards could see all prisoners but prisoners couldn’t see the guards. The possibility of being watched would produce self-discipline.

AI surveillance creates a digital panopticon. Citizens know they might be monitored at any moment. The result is anticipatory self-censorship—people police themselves, conforming to expected behavior without explicit commands.

The Democratic Vulnerability

Democracies are not immune. The same surveillance technologies available to authoritarian states are available to democratic ones. The difference is supposed to be legal constraints and civil society oversight—but these protections are under pressure.

The post-9/11 expansion of surveillance in democracies shows how quickly civil liberties can erode when security concerns are invoked. AI could accelerate this erosion.

Key Takeaways

AI removes the bottleneck that constrained past totalitarian regimes: human attention.

Control shifts from reacting to dissent toward predicting and preventing it, making organized opposition far harder to form.

The tools are already operational, already being exported, and available to democracies as well as dictatorships.

Even perfect surveillance does not solve the self-correction problem: a regime no one can contradict still accumulates errors.
