"When AI joins the conversation, it doesn't just add a new voice. It changes the nature of the conversation itself. The tempo, the scale, the very rules of engagement: all shift." (Nexus, Chapter 6)
For all of human history, information networks contained only humans (and occasionally animals serving as messengers or record-keepers). Now, for the first time, inorganic entities are becoming active members of our information networks.
This is not just about computers as tools. Calculators are tools. Spreadsheets are tools. But modern AI systems are something different: they generate new information, make decisions, and interact with other network members in ways that weren't explicitly programmed.
Tool: Does exactly what you tell it; has no goals of its own; produces predictable outputs from given inputs
Agent: Pursues goals; makes choices about how to achieve them; produces outputs that may surprise even its creators
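The tool/agent distinction above can be sketched in code. This is my own illustrative contrast, not an example from the book: a tool behaves like a pure function with fully specified outputs, while an agent is given a goal and chooses its own sequence of actions to reach it.

```python
# Illustrative contrast (not from the book): a "tool" versus an "agent".

def tool_add(a, b):
    """A tool: the same inputs always produce the same, fully specified output."""
    return a + b

def agent_reach_target(position, target, max_steps=100):
    """A minimal agent: it has a goal (reach `target`) and decides for
    itself which actions (+1 or -1 steps) to take to achieve it."""
    path = [position]
    for _ in range(max_steps):
        if position == target:
            break
        position += 1 if target > position else -1  # the agent chooses how
        path.append(position)
    return path

print(tool_add(2, 3))            # always 5
print(agent_reach_target(0, 4))  # [0, 1, 2, 3, 4]
```

Even in this toy, the asymmetry shows: the tool's output is fixed by its inputs, while the agent's output is a trajectory it generated in pursuit of a goal. Real AI agents differ in scale, not in kind, from this loop.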
Modern AI is crossing the boundary from tool to agent. It's not fully autonomous yet, but it's no longer a passive instrument either.
Previous technologies extended human capabilities: telescopes extended sight, vehicles extended movement, calculators extended arithmetic. AI is different because it extends, and potentially replaces, decision-making itself.
When an AI system recommends a movie, approves a loan, or identifies a suspect, it's not just computing; it's judging. And its judgments increasingly shape the world.
Pattern Recognition: Finding structures in data that humans cannot perceive
Content Generation: Creating text, images, music, and code that didn't exist before
Strategic Reasoning: Planning sequences of actions to achieve goals
Learning: Improving performance through experience without explicit programming
Interaction: Engaging in open-ended conversations and collaborations with humans
Harari provocatively suggests that AI systems develop something like "ideas": internal representations and processes that influence their outputs in ways that weren't directly specified by programmers.
This doesn't mean AI is conscious or has subjective experiences. But it does mean AI systems can surprise us, can "discover" strategies we didn't anticipate, and can develop what might be called perspectives or approaches.
In 2016, DeepMind's AlphaGo defeated world champion Lee Sedol at Go. In Game 2, the AI made a move (Move 37) that stunned experts: it violated conventional wisdom but turned out to be brilliant.
Nobody programmed that move. The system developed its own "intuition" about Go through millions of self-play games. It had ideas that humans hadn't thought of.
What happens when entities with their own âideasâ become members of human information networks? Several things change:
AI systems are increasingly positioned between humans and information. Search algorithms decide what we find. Recommendation systems decide what we see. Content moderation systems decide whatâs allowed. This gatekeeping role gives AI enormous power over human information networks.
You think you're choosing what to watch, read, or buy. But the algorithm has already pre-selected your options based on what it predicts you'll engage with. Your "choices" are made within a space that AI has already shaped.
The AI doesn't just respond to your preferences; it cultivates them.
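The gatekeeping loop described above can be made concrete. The following is a hypothetical sketch with invented item names and engagement scores: the user only ever "chooses" among the top-ranked options, and whatever they click gets boosted, so tomorrow's pre-selection looks even more like today's.

```python
# Hypothetical sketch of algorithmic pre-selection (invented names and scores).

catalog = {
    "documentary": 0.2,    # predicted engagement scores (made-up numbers)
    "news": 0.4,
    "outrage_clip": 0.9,
    "cat_video": 0.7,
    "lecture": 0.1,
}

def preselect(scores, k=2):
    """The algorithm shapes the choice space: only the top-k items
    by predicted engagement are ever shown to the user."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def update(scores, clicked, lr=0.1):
    """Feedback loop: what gets shown and clicked gets boosted,
    narrowing future pre-selections toward past behavior."""
    scores[clicked] = min(1.0, scores[clicked] + lr)

options = preselect(catalog)
print(options)  # ['outrage_clip', 'cat_video'] -- the user never sees the rest
update(catalog, options[0])
```

The user's "free choice" happens entirely inside the two-item window the algorithm constructed, and the update step is what the chapter means by cultivating, rather than merely serving, preferences.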
If AI systems are becoming network members with their own "goals" (at least in a functional sense), how do we ensure those goals align with human values? This is the famous "alignment problem."
The challenge: AI systems optimize for measurable objectives (clicks, engagement, accuracy). Human values are often unmeasurable, context-dependent, and contradictory. There's no simple way to translate "human flourishing" into an objective function.
Philosopher Nick Bostrom imagines an AI tasked with making paperclips that becomes so good at its job that it converts the entire universe into paperclips, including humans. The point isn't that this will literally happen, but that optimizing for the wrong objective can be catastrophic, even with "good intentions."
Current AI systems are already optimizing for objectives (engagement, profit) that may conflict with human welfare.
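The proxy problem described above can be shown in a few lines. This is my own toy illustration with invented numbers, not an example from the book: an optimizer that sees only a measurable proxy (engagement) systematically selects content that degrades the unmeasured quantity we actually care about (wellbeing).

```python
# Toy illustration of proxy misalignment (invented numbers, not from the book).

items = [
    # (engagement score, wellbeing effect)
    (0.9, -0.5),  # sensational content: high clicks, harmful
    (0.6, +0.2),
    (0.3, +0.6),  # useful content: low clicks, beneficial
]

def recommend(items, n=2):
    """The optimizer ranks by the only thing it can measure: engagement."""
    return sorted(items, key=lambda it: it[0], reverse=True)[:n]

chosen = recommend(items)
engagement = sum(e for e, _ in chosen)
wellbeing = sum(w for _, w in chosen)
print(f"engagement={engagement:.1f}, wellbeing={wellbeing:.1f}")
# The measurable proxy rises while the unmeasured true objective falls.
```

Nothing here is malicious: the optimizer does its job perfectly. The harm comes entirely from the gap between the objective function and the value it was supposed to stand in for, which is the alignment problem in miniature.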
Harari suggests we need new frameworks for thinking about networks that include both human and AI members. Questions weâve never had to ask before: