Desired State Systems

You set your thermostat to 70°F. You don’t tell it when to turn the heat on, how long to run it, or what to do if someone opens a window. You just say what temperature you want. The thermostat measures the room, compares it to the target, and closes the gap. Then does it again. And again. Your body works the same way - it holds core temperature at 37°C through feedback loops you never think about.

This shape keeps showing up. Declare what should be true, observe what’s actually true, close the gap. Repeat. In software. In biology. In how organizations work. In how you lead people. I keep finding it everywhere I look.

The alternative is imperative - you spell out every step, you own the sequencing, the error handling, all the edge cases. Desired state systems flip that. You own the what, and a reconciler figures out the how. Branislav Jenco has a great writeup framing this as wrapping imperative, mutable systems with a declarative interface. The imperative stuff doesn’t go away - it just gets hidden behind a reconciliation loop.

Declarative thinking

The first step toward desired state systems in software was declarative interfaces - saying what you want instead of how to get it.

make (1976) is an early example. You declare what you want built and what depends on what. make looks at file timestamps and works out what needs rebuilding. You don’t say “compile this, then link that.” You say “I want this binary” and it figures out the steps.

SQL does the same thing for data. You describe the result set you want - you don’t specify which index to use, what join algorithm, what order to scan. The query planner reconciles your declaration with the physical reality of the data. This is why logically equivalent queries can have wildly different performance: same desired state, different reconciliation paths.
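You can watch the planner make that choice using SQLite from Python's standard library. The schema and index names here are made up for illustration:

```python
import sqlite3

# Hypothetical schema, purely for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
con.execute("CREATE INDEX idx_email ON users (email)")
con.executemany("INSERT INTO users (email) VALUES (?)",
                [("a@example.com",), ("b@example.com",)])

# The query declares the result set; the planner picks the access path.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
    ("a@example.com",),
).fetchall()
# The plan's detail column names the chosen path - here, a search
# using idx_email rather than a full table scan.
```

Drop the index and rerun it, and the same declaration reconciles through a scan instead - same desired state, different path.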

But make and SQL are run-once-and-done. You declare, they compute, you get a result. There’s no loop. No continuous reconciliation. If reality drifts after the fact, they don’t notice. They’re declarative, but they’re not desired state systems in the full sense.

Adding the loop

The jump to real desired state systems happened when software started running the reconciliation continuously.

Configuration management was the bridge. Puppet, Chef, and later Terraform - declare what your infrastructure should look like, and the tool checks reality and fixes any drift. Puppet ran on a schedule, putting back config files that someone manually edited. Terraform computes a diff against your cloud account and generates a plan to reconcile. The loop was there, even if you had to trigger it yourself.
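The diff step can be sketched as: compare declared resources with observed ones, and emit the actions that would close the gap. Resource specs are plain dicts here - an illustrative simplification, nothing like Terraform's real data model:

```python
def plan(desired, actual):
    """Terraform-style diff: the actions that reconcile reality
    with the declaration. Resources are {name: spec} dicts."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))    # declared but missing
        elif actual[name] != spec:
            actions.append(("update", name))    # present but drifted
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))    # running but no longer declared
    return actions
```

The tool's value is that you never write these actions yourself - you only ever edit the declaration.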

Kubernetes (2014) took it to the logical conclusion. Everything is a desired state declaration. Deployments, Services, ConfigMaps - all of it. And the reconciliation is continuous. Controllers - dozens of them, each responsible for one type of resource - run loops all the time. The pseudocode is simple:

while True:
    actual = get_current_state()      # observe reality
    desired = get_desired_state()     # read the declaration
    if actual != desired:
        take_action_to_reconcile()    # close the gap
    sleep(interval)

That loop, replicated across dozens of controllers, produces a self-healing distributed system. Drift corrects itself automatically. Nobody has to run anything - the system is always converging.
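A toy, runnable version of one such controller - a ReplicaSet-style loop reduced to a single resource type, with the cluster standing in as a plain list:

```python
def reconcile(desired_replicas, pods):
    """One reconciliation pass: observe the running pods, compare
    with the declared count, and close the gap."""
    gap = desired_replicas - len(pods)
    if gap > 0:
        for _ in range(gap):
            pods.append(f"pod-{len(pods)}")   # start missing pods
    elif gap < 0:
        del pods[gap:]                        # scale down the extras
    return pods

pods = []
reconcile(3, pods)   # initial convergence: three pods running
pods.pop()           # a node dies, taking a pod with it
reconcile(3, pods)   # the next pass heals the drift automatically
```

The controller never needs to know *why* a pod disappeared - a crash, an eviction, a fat-fingered delete all look the same: a gap to close.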

React (2013) did the same thing for UI. You declare what the screen should look like given the current data. React diffs the desired DOM against the actual DOM and applies the minimal set of mutations. Before React, UI code was imperative: find this element, change its text, add this class, remove that one. You were manually managing the delta. React said: just describe the end state, and every render is a reconciliation pass.
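A sketch of that reconciliation pass, with a flat {node_id: text} dict standing in for the DOM - a deliberate simplification (React diffs real trees, and these names are made up):

```python
def render(items):
    """Declarative view: the desired 'DOM' for the current data."""
    return {f"item-{i}": text for i, text in enumerate(items)}

def commit(desired, dom):
    """Apply the minimal mutations that make the actual DOM match."""
    for node in list(dom):
        if node not in desired:
            del dom[node]            # unmount nodes that disappeared
    for node, text in desired.items():
        if dom.get(node) != text:
            dom[node] = text         # mount new nodes, update changed ones

dom = {}
commit(render(["apples", "tea"]), dom)      # first render mounts both nodes
commit(render(["apples", "coffee"]), dom)   # second render only touches item-1
```

The view code never says "change this node's text" - it only ever says what the whole screen should be, and the commit step finds the delta.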

Older than software

The thermostat is the simple version. But the pattern goes back centuries - and biology has been running it for billions of years.

James Watt’s centrifugal governor from 1788 is a mechanical reconciliation loop. Steam engines would speed up or slow down uncontrollably as load changed. Watt attached spinning weights to a throttle valve - as the engine sped up, the weights rose and closed the valve; as it slowed, they dropped and opened it. The desired state was a target speed. The governor closed the gap continuously, with no human in the loop.

Your body does the same thing at a staggering scale. Claude Bernard recognized this in the 1860s - your body holds core temperature, blood pH, and glucose levels within narrow bands through continuous feedback loops. Walter Cannon later named it homeostasis.

It goes deeper than body temperature. Gene regulatory networks maintain target protein concentrations through feedback. The immune system is a reconciliation loop - desired state is “no foreign pathogens,” and it continuously monitors and corrects, with something like eventual consistency (and autoimmune disorders as bugs in the reconciler). Evolution itself fits the shape, though nobody’s declaring the desired state - the environment defines fitness, natural selection reconciles, and organisms that don’t match get selected out.

Cybernetics gave it a name

Norbert Wiener saw the common thread across all of these - mechanical governors, biological homeostasis, control systems - and formalized it in 1948 with cybernetics, from the Greek kybernetes, meaning “steersman.” His insight: you don’t need to predict and plan every step. You just need a goal, a sensor, and a corrective mechanism. The loop handles the rest.

Control theory turned this into math. PID controllers are reconciliation loops with knobs: how hard to push based on the current gap (proportional), how to account for accumulated past error (integral), how to anticipate where things are headed (derivative). Cruise control, autopilots, industrial process control - all desired state systems with the same structure: a setpoint (desired state), a process variable (current state), and an error signal (the gap).
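Here is that structure as a minimal sketch - the gains and the toy plant model are illustrative choices, not tuned for any real system:

```python
class PID:
    """Minimal PID controller: a reconciliation loop with knobs."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint            # desired state
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        error = self.setpoint - measurement          # the gap
        self.integral += error * dt                  # accumulated past error
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / dt)  # where it's headed
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# toy plant: temperature moves in direct response to controller output
pid = PID(kp=0.8, ki=0.05, kd=0.1, setpoint=20.0)
temp = 15.0
for _ in range(500):
    temp += 0.1 * pid.update(temp, dt=1.0)
```

Run long enough, the loop converges on the setpoint without ever being told the plant's dynamics - exactly Wiener's point about not needing a perfect model of the world.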

This was Wiener’s whole argument - that feedback-based systems are fundamentally more robust than planned sequences. You don’t need to anticipate every failure or have a perfect model of the world. The loop corrects for disturbances you never predicted. That’s why Watt’s governor worked for steam engines with wildly varying loads, and it’s why Kubernetes can recover from node failures nobody planned for.

Mark Burgess - who built CFEngine, one of the earliest configuration management tools - spent years formalizing why. His promise theory argues that desired state declarations are more reliable than imperative commands specifically because they’re self-correcting. Burgess called it “convergent maintenance” - the system converges toward its goal regardless of where it starts.

Control theory formalized the failure modes too. Long delays between action and effect cause oscillation and overshoot. The Fed is a desired state system - target inflation rate, observe the economy, adjust interest rates to reconcile - but the feedback delays are enormous, which is why monetary policy is so hard to get right. Donella Meadows wrote about this extensively. Delays are poison for any reconciliation system.
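A ten-line simulation shows the failure mode. Same controller, same gain - the only difference is how stale the measurement is (all the numbers are illustrative):

```python
def run(gain, delay, steps=40):
    """Proportional control toward a target of 0, but acting on a
    measurement that is `delay` steps stale."""
    x = 1.0                          # start off-target
    history = [x] * delay            # the observation pipeline
    trace = []
    for _ in range(steps):
        measured = history.pop(0)    # stale reading of the gap
        x -= gain * measured         # corrective action
        history.append(x)
        trace.append(x)
    return trace

fresh = run(gain=0.5, delay=1)   # near-immediate feedback: smooth convergence
stale = run(gain=0.5, delay=5)   # five steps of lag: overshoot, then oscillation
```

With fresh feedback the gap shrinks geometrically. With a five-step-old reading, the controller keeps pushing long after the gap has closed, blows past the target, and oscillates.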

Agency

The pattern shows up in how people work together.

The best leaders I’ve worked with operate like desired state systems. They declare intent: here’s what success looks like, here’s the outcome we need, here’s where we’re headed. Then they let the team figure out how to get there. The team becomes the reconciler. They observe reality, identify gaps, and take action. When something doesn’t work, they adjust and try again. The leader doesn’t prescribe every step - they set the target and trust the loop.

The worst leadership I’ve seen is imperative. Do this, then this, then this. Every decision flows through one person. It works when the plan is right and nothing changes, which is almost never. The moment reality deviates from the plan - and it always does - the whole thing stalls waiting for new instructions.

This is exactly the waterfall vs agile split. Waterfall is imperative: plan everything upfront, execute the plan. Agile is desired state: define what success looks like, ship something small, measure the gap between where you are and where you want to be, adjust. The sprint is the reconciliation loop. The retro is the feedback sensor. The whole methodology is built on the assumption that you can’t predict your way to a good outcome - you have to iterate toward it.

What makes people different from Kubernetes controllers is that people have agency. They have their own goals, their own context, their own judgment. A Kubernetes controller can only do what it’s programmed to do when it sees a gap. A person can be creative about it. They can find solutions you’d never have thought to prescribe. The imperative approach wastes that. The desired state approach leans into it - agency isn’t a problem to manage, it’s the thing that makes the reconciler powerful.

This is the same question showing up right now with AI agents. You give an agent a goal - “book me a flight,” “fix this bug,” “answer this question using these tools” - and it observes context, plans actions, and iterates toward the goal. It’s a desired state system where the reconciler has some autonomy. And the central tension is exactly the same one you face leading people: how much do you trust the reconciler? How much autonomy do you give it? Too little and you’re back to imperative - you might as well have written the script yourself. Too much and it goes somewhere you didn’t want. The art is in the calibration.

I’ve seen this play out at every scale. OKRs work when they’re actual desired state declarations - here’s the outcome, figure out how - and fail when they become task lists in disguise. Mission statements work when they genuinely steer decisions and fail when they’re just words on a wall. The difference is always whether there’s a real reconciliation loop: are people actually measuring the gap and adjusting, or are they just executing a plan and hoping?

It applies to yourself too. The goals that have actually changed how I work aren’t the ones where I planned every step. They’re the ones where I got clear on what “good” looks like and then kept adjusting. You define the person you want to be, the skills you want to have, the kind of work you want to do - and then you run the loop. Observe where you are, notice the gap, take a small action to close it. Some days you drift. The loop catches it.

The costs

None of this is free. Anyone who’s run kubectl apply and then stared at kubectl get pods waiting for convergence knows the reality of eventual consistency. You declared your intent. The system accepted it. Now you wait, and hope. Sometimes convergence is fast. Sometimes a pod is stuck in ImagePullBackOff and you’re reading events across three controllers trying to figure out why. There’s no stack trace, no single execution path to follow - just a set of independent loops that each saw a different piece of the problem.

Burgess was honest about this too. He wrote that the hardest part of desired state systems isn’t the declaration or the loop - it’s building a reconciler that correctly observes reality. State observation is where the bugs live. A controller that misreads current state will “fix” things that aren’t broken, or miss things that are. In Kubernetes, the gap between what etcd says and what’s actually running on a node is where entire categories of bugs come from.

These costs aren’t unique to software. Leading with intent means accepting you don’t fully control the path. Teams converge on goals the same way pods do - sometimes fast, sometimes stuck for reasons you can’t see from where you’re standing. The state observation problem is just as real in organizations: the metrics you’re looking at might not reflect what’s actually happening. OKRs can be green while the thing that matters is quietly failing. The Fed can read every economic indicator and still misjudge where the economy actually is. The reconciler is only as good as its view of reality, and reality is always harder to observe than you think.

The through line

Watt’s governor, homeostasis, cybernetics, make, SQL, Kubernetes, React, the immune system, the Fed, how you lead a team, how you lead yourself. The same shape, discovered and rediscovered independently across centuries and domains.

I think the reason it keeps showing up is that it’s a natural response to uncertainty. Imperative plans assume you can predict what’s going to happen. Desired state systems assume you can’t - and that’s fine, because you don’t need to. You just need a goal and a feedback loop. Whatever went wrong, wherever you are right now, the loop asks the same question: what’s the gap, and what do I do about it?

That works for steam engines and distributed systems. It works for teams and organizations. It works for figuring out what kind of person you want to be. The world is too complex to plan your way through. But you can declare where you want to end up and keep correcting course.