“In complex systems, solutions that seem obvious often fail — and solutions that seem absurd often work.” — Dan Heath
Every upstream intervention is an act of deliberate change in a complex system. And complex systems respond to deliberate change in ways that are difficult to predict, sometimes counterproductive, and occasionally catastrophic.
This is not an argument against upstream thinking — it is an argument for humility, for feedback loops, and for the habit of asking: “If this works exactly as planned, what could go wrong?”
The history of public policy is full of well-intentioned upstream interventions that created new problems as they solved old ones. The key is not to avoid intervention — inaction has its own unintended consequences — but to design for adaptability and to build in the feedback mechanisms that let you detect and correct errors.
In the 1990s, US schools adopted “zero tolerance” policies for weapons and drugs — an upstream intervention designed to prevent violence before it occurred. The intent was unambiguously good: create safety, deter dangerous behavior, send a clear message.
The results were more complex. Zero tolerance policies were implemented with significant racial disparity — Black and Latino students were suspended and expelled at dramatically higher rates than white students for equivalent behavior. Students who were suspended fell behind academically, became more disengaged, and were more likely to drop out. Students who dropped out were more likely to encounter the criminal justice system.
The intervention that was designed to increase school safety created a “school-to-prison pipeline” that damaged the life outcomes of hundreds of thousands of students over decades. This was an upstream intervention with severe downstream consequences.
Heath draws on the systems thinking literature to make a crucial distinction: complicated systems (like a car engine or a flight control system) have many parts, but the relationships between parts are knowable and predictable. With enough expertise, you can diagnose any problem and prescribe a solution. Complex systems (like an ecosystem, an economy, or a school) have emergent properties — the system behaves in ways that cannot be predicted from the behavior of its parts.
In a complicated system, the right expert with the right information can design the right solution in advance. In a complex system, no one is expert enough to predict all the interactions, feedback loops, and emergent behaviors that will follow from an intervention.
This doesn’t mean we shouldn’t intervene. It means we must intervene with humility: design for adaptability, build in feedback loops that detect harm early, and keep asking, “If this works exactly as planned, what could go wrong?”
The antidote to unintended consequences is not caution — it’s feedback. Heath argues that every upstream intervention should be designed with explicit mechanisms for detecting unintended harm.
Good feedback loops for upstream programs have three properties:
Speed: The feedback arrives quickly enough to matter. If you discover the intervention is causing harm after two years of implementation, that is much worse than discovering it after two months.
Specificity: The feedback tells you not just that something is wrong, but what is wrong and where. “Outcomes are declining” is not enough. “Dropout rates among the specific subgroup we targeted are rising despite improved attendance” is actionable.
Independence: The feedback comes from sources that aren’t incentivized to hide bad news. Internal program evaluation is better than nothing, but independent evaluation — especially from organizations that don’t benefit from the program’s continuation — is more trustworthy.
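The speed and specificity properties can be made concrete in a monitoring check. The sketch below (all data, names, and the 5-point threshold are hypothetical, not from Heath) computes outcomes per subgroup rather than as a single aggregate, so the feedback says *where* things are going wrong, not just *that* they are:

```python
from collections import defaultdict

def subgroup_dropout_rates(records):
    """Compute the dropout rate per subgroup, not just overall.

    records: list of dicts with hypothetical keys 'subgroup'
    and 'dropped_out' (bool). A single aggregate rate can hide
    a subgroup that is getting worse.
    """
    totals = defaultdict(int)
    dropouts = defaultdict(int)
    for r in records:
        totals[r["subgroup"]] += 1
        if r["dropped_out"]:
            dropouts[r["subgroup"]] += 1
    return {g: dropouts[g] / totals[g] for g in totals}

def flag_diverging_subgroups(rates, overall, threshold=0.05):
    """Return subgroups whose rate exceeds the overall rate by
    more than `threshold` -- specific, actionable feedback."""
    return sorted(g for g, r in rates.items() if r - overall > threshold)

# Hypothetical monitoring data for two subgroups
records = [
    {"subgroup": "A", "dropped_out": False},
    {"subgroup": "A", "dropped_out": False},
    {"subgroup": "A", "dropped_out": False},
    {"subgroup": "A", "dropped_out": True},
    {"subgroup": "B", "dropped_out": True},
    {"subgroup": "B", "dropped_out": True},
    {"subgroup": "B", "dropped_out": False},
    {"subgroup": "B", "dropped_out": True},
]
rates = subgroup_dropout_rates(records)
overall = sum(r["dropped_out"] for r in records) / len(records)
flagged = flag_diverging_subgroups(rates, overall)  # → ["B"]
```

Running this check monthly rather than at a two-year evaluation supplies the speed property; routing the output to an evaluator outside the program team supplies independence.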
A specific category of upstream intervention deserves special attention: algorithmic or data-driven systems designed to predict and prevent problems. These systems can be powerful — predicting which students are at risk of dropping out, which patients are at risk of readmission, which areas are at risk of flooding.
But predictive systems also inherit and amplify the biases in the data they’re trained on, create new forms of stigma for the people they identify, and can produce feedback loops where early identification leads to differential treatment that makes the prediction self-fulfilling.
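The self-fulfilling loop can be shown with a toy simulation (every group name, threshold, and penalty below is illustrative, not drawn from any real system). A small initial gap between two groups, once one crosses the flagging threshold, is amplified each round because the differential treatment itself raises that group's measured risk:

```python
def simulate_feedback_loop(base_risk, flag_threshold, treatment_penalty, rounds=3):
    """Toy simulation of a self-fulfilling prediction loop.

    base_risk: dict mapping group -> initial true risk (0..1).
    Groups whose risk meets flag_threshold are flagged; the
    differential treatment they then receive adds
    treatment_penalty to their true risk in the next round,
    so the next round's data "confirms" the prediction.
    """
    risk = dict(base_risk)
    history = [dict(risk)]
    for _ in range(rounds):
        flagged = {g for g, r in risk.items() if r >= flag_threshold}
        risk = {
            g: min(1.0, r + (treatment_penalty if g in flagged else 0.0))
            for g, r in risk.items()
        }
        history.append(dict(risk))
    return history

history = simulate_feedback_loop(
    base_risk={"group_x": 0.35, "group_y": 0.30},  # small initial gap
    flag_threshold=0.33,
    treatment_penalty=0.10,
)
# group_x is flagged every round and drifts upward;
# group_y, never flagged, stays where it started.
```

The design lesson mirrors the feedback-loop argument above: a predictive system needs independent checks on whether the intervention triggered by a prediction is itself changing the outcome being predicted.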
Think of a program or policy you support that was designed to prevent harm. What would you need to observe to conclude that it was actually causing harm to some people it was meant to help? Are those observation mechanisms currently in place?