Thinking in Systems
Donella Meadows’ Thinking in Systems gave me a vocabulary for something I’d been noticing for years but couldn’t articulate. Why do some problems resist every obvious fix? Why does hiring more people sometimes make a team slower? Why does the same pattern of failure keep showing up in completely different contexts? The answer, almost always, is that you’re looking at a piece of a system and treating it like the whole thing.
Stocks and flows
A system is a set of things connected in a way that produces behavior over time. The core building blocks are stocks and flows. Stocks are things you can measure at a point in time - water in a tank, money in an account, users on a platform, engineers on a team. Flows are rates of change - water flowing in or out, deposits and withdrawals, signups and churn, hiring and attrition.
I’ve seen this one play out more times than I can count. A team is struggling, shipping is slow, and the immediate response is to hire. But if attrition is high - if people are burning out or leaving - you’re pouring water into a leaky bucket. The stock (team size) won’t grow until you fix the outflow. Worse, new hires create onboarding load on the people who are already stretched thin, which accelerates the outflow. You can’t understand what’s happening by looking at headcount alone. You have to see the flows.
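The leaky-bucket dynamic is easy to sketch as a stock-and-flow simulation. Every number here is invented for illustration - the hiring rate, the attrition rate, the onboarding strain - but the shape of the result is the point:

```python
def simulate_team(months=24, headcount=20.0, hires_per_month=2.0,
                  base_attrition=0.08, strain_per_hire=0.01):
    """Toy stock-and-flow model: headcount is the stock, hiring the
    inflow, attrition the outflow. All rates are made up."""
    for _ in range(months):
        # Onboarding load nudges attrition up: the new inflow
        # accelerates the very outflow it was meant to offset.
        attrition_rate = base_attrition + strain_per_hire * hires_per_month
        headcount += hires_per_month - headcount * attrition_rate
    return headcount

# Hiring 2/month against 10% effective attrition: the stock barely moves.
print(simulate_team())
# Halve base attrition instead - same hiring rate, and now the team grows.
print(simulate_team(base_attrition=0.04))
```

Two years of hiring with the leak unfixed leaves headcount where it started; fixing the outflow with the exact same hiring budget grows the team past 30.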
The same thing happens with technical debt. The stock is the codebase’s complexity. The inflow is every shortcut, every “we’ll fix it later.” The outflow is every cleanup, every refactor. Most teams only see the stock - “the code is messy” - without tracking the flows. So they do a big cleanup push, feel good about it, and then watch it degrade again because nobody changed the rate of inflow. The intervention felt productive but didn’t change the system’s behavior at all.
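The same logic makes the cleanup-push failure mode concrete. In this toy model (units and rates arbitrary), debt changes by inflow minus outflow each month, and a one-time cleanup removes a chunk without touching either rate:

```python
def simulate_debt(months=36, debt=100.0, inflow=5.0, outflow=2.0,
                  cleanup_month=None, new_inflow=None):
    """Debt is the stock; shortcuts are the inflow, refactoring the
    outflow. All numbers are invented for illustration."""
    for month in range(months):
        if month == cleanup_month:
            debt = max(0.0, debt - 60.0)  # the big cleanup push
            if new_inflow is not None:
                inflow = new_inflow       # ...optionally paired with
                                          # changing how debt accrues
        debt += inflow - outflow
    return debt

print(simulate_debt())                              # no cleanup: 208.0
print(simulate_debt(cleanup_month=6))               # cleanup only: 148.0
print(simulate_debt(cleanup_month=6, new_inflow=2)) # cleanup + rate: 58.0
```

Three years later, the cleanup-only run has more debt than it started with; only the run that also changed the inflow rate stays down.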
Feedback loops
Reinforcing loops amplify change. Compound interest, viral growth, technical debt accumulating - once they get going, they accelerate. A product gets popular, which attracts more users, which makes it more useful, which attracts more users. These loops can work in your favor or against you.
Balancing loops push toward equilibrium. Thermostats, market prices, your body temperature. They resist change - when something moves away from a target, the loop pushes it back. A desired-state system - anything that continuously reconciles what actually exists toward a declared target - is essentially a balancing loop by design.
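One place this pattern is deliberately engineered is the reconciliation loop at the heart of declarative infrastructure tools. This sketch is generic - the names are illustrative, not any real API:

```python
# A desired-state reconciler is a balancing loop in code: measure the
# gap between actual and desired, act to close it, repeat.

def reconcile(actual: set, desired: set):
    to_create = desired - actual   # the gap in one direction
    to_delete = actual - desired   # the gap in the other
    for item in to_create:
        actual.add(item)           # stand-in for "provision the resource"
    for item in to_delete:
        actual.remove(item)        # stand-in for "tear it down"
    return actual

state = {"web-1", "web-2", "worker-1"}
reconcile(state, desired={"web-1", "web-2", "web-3"})
print(sorted(state))  # → ['web-1', 'web-2', 'web-3']
```

Run on a timer, this closes the gap no matter which direction the drift came from - which is exactly what a balancing loop does.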
Most interesting behavior comes from reinforcing and balancing loops interacting, and this is where the mental model starts earning its keep. A startup grows fast (reinforcing loop) until it hits scaling problems that slow growth (balancing loop). An engineering team ships faster as they build momentum (reinforcing) until technical debt starts dragging them down (balancing). When you’re inside these dynamics, they’re genuinely hard to see. You just feel the symptoms - things that used to be easy are suddenly hard, and nobody can explain why. The system changed its behavior because the dominant loop shifted, but nothing visible happened. No single decision caused it. The structure did.
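The momentum-then-drag shape falls out of even the simplest model where a reinforcing and a balancing loop share one stock. The numbers below are invented; the trajectory is what matters:

```python
def simulate_users(months=30, users=1000.0, viral=0.3, capacity=100_000.0):
    """Viral signups (reinforcing) against scaling drag (balancing)."""
    history = [users]
    for _ in range(months):
        inflow = viral * users                    # reinforcing: users beget users
        drag = viral * users * (users / capacity) # balancing: strain near capacity
        users += inflow - drag
        history.append(users)
    return history

h = simulate_users()
early = h[3] / h[2] - 1   # monthly growth while the reinforcing loop dominates
late = h[-1] / h[-2] - 1  # monthly growth once the balancing loop takes over
print(f"early growth {early:.0%}, late growth {late:.0%}")
```

Nothing in the code "decides" to slow down partway through; the dominant term simply shifts from inflow to drag as users approaches capacity. From inside, that shift would feel like growth mysteriously stalling.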
Delays
The time between action and consequence. This is the one that bites hardest.
You turn up the hot water, it’s still cold, you turn it up more, then it’s scalding. That’s a delay causing overshoot. Now scale it up: a company sees slowing growth, hires aggressively, but new engineers take months to ramp up and in the meantime the added coordination overhead actually slows things further. By the time the new hires are productive, the market has shifted.
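You can feel the overshoot in a toy controller. Here the faucet adjustment takes a few ticks to reach the showerhead; every parameter is made up:

```python
from collections import deque

def shower(steps=30, target=38.0, delay=3, gain=0.3):
    felt = 15.0                   # water temperature you feel right now
    pipe = deque([0.0] * delay)   # adjustments still travelling the pipe
    temps = []
    for _ in range(steps):
        # You react to what you feel - not to what's already in the pipe.
        pipe.append(gain * (target - felt))
        felt += pipe.popleft()    # an earlier adjustment finally arrives
        temps.append(felt)
    return temps

print(max(shower()))          # swings far past the 38° target before settling
print(max(shower(delay=0)))   # no delay: approaches 38° and never overshoots
```

The instinct under a delay is to push harder, but a larger gain here only widens the swings - that's the scalding shower. With no delay, the same controller converges smoothly.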
Delays are one of the most common sources of bad decisions because they make cause and effect hard to connect. The decision that caused today’s problem was made six months ago. The fix you implement today won’t show results for three months. In the meantime, the pressure to “do something” leads to more intervention, which creates more delayed effects, which makes the system even harder to read.
Monetary policy is the classic example - the Fed adjusts interest rates, but the effect on the real economy takes months to years. By the time they see the result, the situation has already changed. But I’ve seen the same pattern in engineering organizations. You reorganize teams to fix a collaboration problem, but the reorg creates its own disruption that takes months to settle. Before it settles, someone looks at the metrics, sees things got worse, and reorganizes again. The system never gets a chance to respond to the first intervention before the second one lands.
The practical lesson: when you’re operating in a system with long delays, small adjustments and patience beat aggressive intervention almost every time. The hard part is that patience looks like inaction, and inaction is hard to defend.
Leverage points
Places where small changes produce big effects. Meadows has a famous list of these, ranked from least to most powerful.
The weak ones are what most people fight over - adjusting numbers, tweaking parameters. Tax rates, buffer sizes, headcount targets. These matter, but they rarely change behavior fundamentally.
The strong ones are structural: changing the rules of the system, changing the goals, changing the paradigm the system operates under. Switching from imperative to declarative infrastructure doesn’t just change a parameter - it changes the feedback structure. Moving from top-down planning to autonomous teams with clear goals doesn’t just move a number - it rewires how the organization learns.
This is the thing I keep coming back to. When something is broken, the instinct is to adjust a parameter - hire more, spend more, add a process. But the system’s behavior comes from its structure, not its parameters. If the feedback loops and delays are producing the wrong behavior, tweaking numbers won’t fix it. You have to change the structure. And changing structure is uncomfortable because it means admitting the current design is wrong, not just the current settings.
Meadows’ observation is that people intuitively gravitate toward the weak leverage points because they’re easier to see and less threatening to change. The powerful ones require you to rethink assumptions. I think this is why so many organizational problems persist - the structural fix is obvious from the outside but politically impossible from the inside, so people keep adjusting parameters and wondering why nothing changes.
Seeing the system
The real value isn’t any single concept - stocks, flows, loops, delays, leverage. It’s the habit of asking: what’s the system here? Where are the feedback loops? What are the delays? Where’s the leverage?
Once you start asking those questions, problems that seemed intractable look different. Traffic congestion isn’t solved by more roads - that’s a reinforcing loop (more roads, more driving, more congestion). Hiring faster doesn’t fix a retention problem. Throwing more servers at a performance issue doesn’t help if the bottleneck is a database lock. Symptoms aren’t causes, and the most obvious intervention is often the least effective one.
I think the deepest thing Meadows taught me is humility about intervention. Complex systems are going to surprise you. The side effects of your fix might be worse than the original problem. The delay between action and result means you won’t know for a while. And the system will resist your changes in ways you didn’t predict, because the balancing loops you didn’t see will push back. None of that means you shouldn’t act - but it means you should act with the awareness that you’re intervening in something you don’t fully understand, and you should watch carefully for what happens next.