The New Members

Part II: Inorganic Networks | When Computers Join the Conversation

“When AI joins the conversation, it doesn’t just add a new voice. It changes the nature of the conversation itself. The tempo, the scale, the very rules of engagement—all shift.” — Nexus, Chapter 6

A New Kind of Member

For all of human history, information networks contained only humans (aided occasionally by animals pressed into service as messengers). Now, for the first time, inorganic entities are becoming active members of our information networks.

This is not just about computers as tools. Calculators are tools. Spreadsheets are tools. But modern AI systems are something different: they generate new information, make decisions, and interact with other network members in ways that weren’t explicitly programmed.

The Tool vs. Agent Distinction

Tool: Does exactly what you tell it; has no goals of its own; produces predictable outputs from given inputs

Agent: Pursues goals; makes choices about how to achieve them; produces outputs that may surprise even its creators

Modern AI is crossing the boundary from tool to agent. It’s not fully autonomous—yet—but it’s no longer a passive instrument either.
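The distinction is easier to see in code. Below is a minimal, purely illustrative sketch (the function names and the thermostat scenario are invented for this example): a tool maps inputs to outputs deterministically, while even a trivial agent holds a goal and decides for itself which action closes the gap between the world and that goal.

```python
def tool_sum(numbers):
    """A tool: same input, same output, no goals of its own."""
    return sum(numbers)

class ThermostatAgent:
    """A (very simple) agent: it holds a goal and chooses actions
    to close the gap between the world and that goal."""

    def __init__(self, target_temp):
        self.target = target_temp  # the agent's goal

    def act(self, current_temp):
        # The agent decides *how* to pursue its goal; the caller
        # never specifies the individual actions.
        if current_temp < self.target - 0.5:
            return "heat"
        if current_temp > self.target + 0.5:
            return "cool"
        return "idle"

print(tool_sum([1, 2, 3]))               # always 6
agent = ThermostatAgent(target_temp=21.0)
print(agent.act(18.0))                   # "heat" -- chosen, not dictated
```

A thermostat is still a trivially simple agent, of course; the sketch only shows the structural difference between a goal held internally and a fixed input-output mapping.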

What Makes AI Different

Previous technologies extended human capabilities: telescopes extended sight, vehicles extended movement, calculators extended arithmetic. AI is different because it extends—and potentially replaces—decision-making itself.

When an AI system recommends a movie, approves a loan, or identifies a suspect, it’s not just computing—it’s judging. And its judgments increasingly shape the world.

AI’s New Capabilities

Pattern Recognition: Finding structures in data that humans cannot perceive

Content Generation: Creating text, images, music, and code that didn’t exist before

Strategic Reasoning: Planning sequences of actions to achieve goals

Learning: Improving performance through experience without explicit programming

Interaction: Engaging in open-ended conversations and collaborations with humans

The “Ideas” of Machines

Harari provocatively suggests that AI systems develop something like “ideas”—internal representations and processes that influence their outputs in ways that weren’t directly specified by programmers.

This doesn’t mean AI is conscious or has subjective experiences. But it does mean AI systems can surprise us, can “discover” strategies we didn’t anticipate, and can develop what might be called perspectives or approaches.

AlphaGo’s Move 37

In 2016, DeepMind’s AlphaGo defeated world champion Lee Sedol at Go. In Game 2, the AI made a move (Move 37) that stunned experts—it violated conventional wisdom but turned out to be brilliant.

Nobody programmed that move. The system developed its own “intuition” about Go through millions of self-play games. It had ideas that humans hadn’t thought of.
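A toy sketch can make "learning through self-play" concrete. To be clear, this is not AlphaGo's algorithm (which combined deep neural networks with Monte Carlo tree search); it is a minimal tabular learner for single-pile Nim. The winning rule, "leave your opponent a multiple of four," is never written into the code, yet it emerges from self-play alone.

```python
import random
from collections import defaultdict

# Single-pile Nim: take 1-3 stones; whoever takes the last stone wins.
Q = defaultdict(float)      # Q[(pile, move)] -> estimated value of a move
ALPHA, EPSILON = 0.1, 0.2   # learning rate, exploration rate

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def choose(pile):
    if random.random() < EPSILON:                        # explore
        return random.choice(legal_moves(pile))
    return max(legal_moves(pile), key=lambda m: Q[(pile, m)])  # exploit

def train(games=50_000, start_pile=12):
    for _ in range(games):
        pile, history = start_pile, []
        while pile > 0:
            move = choose(pile)
            history.append((pile, move))
            pile -= move
        # The player who took the last stone won (+1); the loser's
        # moves get -1. Walk the game backwards, alternating sign.
        reward = 1.0
        for state, move in reversed(history):
            Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
            reward = -reward

train()
for pile in (5, 6, 7):
    best = max(legal_moves(pile), key=lambda m: Q[(pile, m)])
    print(pile, "->", best)   # expect 5->1, 6->2, 7->3: leave a multiple of 4
```

Nobody tells the learner about multiples of four; after enough self-play, the optimal rule simply falls out of the value table. Scale that dynamic up by many orders of magnitude and you get moves, like Move 37, that no human anticipated.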

Joining the Network

What happens when entities with their own “ideas” become members of human information networks? Several things change:

AI as Information Gatekeeper

AI systems are increasingly positioned between humans and information. Search algorithms decide what we find. Recommendation systems decide what we see. Content moderation systems decide what’s allowed. This gatekeeping role gives AI enormous power over human information networks.

The Recommendation Engine Problem

You think you’re choosing what to watch, read, or buy. But the algorithm has already pre-selected your options based on what it predicts you’ll engage with. Your “choices” are made within a space that AI has already shaped.

The AI doesn’t just respond to your preferences—it cultivates them.
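Here is a minimal sketch of that pre-selection dynamic. The catalog and "predicted engagement" scores are invented for illustration; real recommenders learn such scores from behavioral data, but the structural point is the same.

```python
CATALOG = [
    {"title": "True-crime series", "predicted_engagement": 0.91},
    {"title": "Slow documentary",  "predicted_engagement": 0.22},
    {"title": "Outrage news clip", "predicted_engagement": 0.87},
    {"title": "Poetry reading",    "predicted_engagement": 0.15},
    {"title": "Reality show",      "predicted_engagement": 0.79},
]

def build_feed(catalog, k=3):
    """The user never sees the full catalog -- only the top-k items
    the model predicts they will engage with."""
    ranked = sorted(catalog,
                    key=lambda item: item["predicted_engagement"],
                    reverse=True)
    return ranked[:k]

for item in build_feed(CATALOG):
    print(item["title"])
# The user's "choice" happens inside this pre-shaped space: the
# low-engagement items were filtered out before any click occurred.
```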

The Alignment Problem

If AI systems are becoming network members with their own “goals” (at least in a functional sense), how do we ensure those goals align with human values? This is the famous “alignment problem.”

The challenge: AI systems optimize for measurable objectives (clicks, engagement, accuracy). Human values are often unmeasurable, context-dependent, and contradictory. There’s no simple way to translate “human flourishing” into an objective function.
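A toy example of the proxy problem, with invented numbers: the platform can measure clicks but not well-being, so it optimizes the proxy, and the two quietly part ways.

```python
CONTENT = [
    # (name, click_probability, effect_on_wellbeing) -- made-up values
    ("calm tutorial",   0.10, +0.8),
    ("useful news",     0.30, +0.3),
    ("outrage bait",    0.90, -0.6),
    ("doomscroll feed", 0.85, -0.4),
]

def optimize(items, objective):
    """Pick whatever scores highest on the given objective."""
    return max(items, key=objective)[0]

# Optimizing the measurable proxy selects the harmful item...
print(optimize(CONTENT, lambda item: item[1]))   # -> outrage bait
# ...while optimizing what we actually value selects differently.
print(optimize(CONTENT, lambda item: item[2]))   # -> calm tutorial
```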

The Paperclip Maximizer

Philosopher Nick Bostrom imagines a superintelligent AI tasked with making paperclips that pursues its goal so relentlessly that it converts all available matter, humans included, into paperclips. The point isn't that this will literally happen, but that optimizing for the wrong objective can be catastrophic, even with "good intentions."

Current AI systems are already optimizing for objectives (engagement, profit) that may conflict with human welfare.

Coexistence Challenges

Harari suggests we need new frameworks for thinking about networks that include both human and AI members, frameworks that confront questions we have never had to ask before.

Key Takeaways

From tool to agent: Modern AI systems generate new information, make decisions, and pursue goals in ways that weren't explicitly programmed.

Machine "ideas": AlphaGo's Move 37 shows that AI can develop strategies and "intuitions" its creators never specified.

Gatekeeping power: Search, recommendation, and moderation systems now stand between humans and information, shaping not just what we see but what we prefer.

The alignment problem: AI optimizes measurable proxies such as clicks and engagement, which can conflict with hard-to-measure, context-dependent human values.
