Bounded Rationality

Bounded rationality is Herbert Simon’s term for how people actually make decisions, as opposed to how economic models assume they do. The idea is simple: we never have complete information, we can’t process what we do have perfectly, and we don’t have infinite time. So we don’t optimize. We do something else entirely.

Satisficing

Classical economics assumes people are rational optimizers - they have all the information, can process it perfectly, and always choose what maximizes their utility. Simon pointed out this is obviously wrong. Real people have incomplete information, limited brainpower, and not enough time.

So instead of optimizing, we satisfice - Simon’s blend of “satisfy” and “suffice” for finding something good enough and moving on. We use rules of thumb, heuristics, shortcuts. Not because we’re lazy, but because it’s the only way to actually function in a world with too much information and too little time.
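The difference is easy to see in code. A minimal sketch (the names `satisfice`, `optimize`, and the aspiration threshold are illustrative, not from Simon): the satisficer stops at the first option that clears an aspiration level, while the optimizer has to score every option.

```python
import random

def satisfice(candidates, score, aspiration):
    """Return the first candidate whose score clears the aspiration level."""
    for c in candidates:
        if score(c) >= aspiration:
            return c
    return None  # nothing was good enough; a satisficer would lower the bar and retry

def optimize(candidates, score):
    """Score every candidate exhaustively - often infeasible in practice."""
    return max(candidates, key=score)

random.seed(42)
options = [random.random() for _ in range(1_000_000)]

good_enough = satisfice(options, lambda x: x, aspiration=0.9)  # stops early
best = optimize(options, lambda x: x)                          # scans all million
```

The satisficer typically touches a handful of options; the optimizer touches them all - and in real decisions the list of options isn’t even enumerable.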

The interesting thing isn’t that we’re irrational. It’s that these shortcuts mostly work. Evolution and experience have given us heuristics that are good enough for most situations. The failures are at the edges - when the situation is genuinely novel, when our biases are being exploited, or when the stakes are high enough that “good enough” isn’t.

Building for satisficers

Once you accept that everyone is satisficing all the time, it changes how you build things.

Users won’t read all the options. They’ll pick the first thing that looks right. Engineers don’t explore the entire solution space. They reach for familiar patterns and iterate from there. Teams don’t analyze every possible architecture. They go with what they can reason about given the constraints they have. A dropdown with 5 options gets used. A settings page with 50 options gets ignored.

This isn’t a bug - it’s the only way to ship anything. If you tried to actually optimize every decision in a codebase, you’d never finish.

I’ve noticed this in my own work more than I’d like to admit. I’ll reach for a tool I already know over one that might be better for the job. I’ll use a pattern I’ve used before rather than searching for the optimal one. I’ll make an architectural choice based on what I can reason about in the time I have, knowing there might be something better I’m not seeing. And honestly? That’s usually fine. The decision I make at 80% confidence in 10 minutes is often better than the decision I’d make at 95% confidence after a week of analysis - because in that week, the landscape shifted, or I could have been building and learning from what I built.

The trick is knowing which decisions are worth more deliberation and which ones you can satisfice on safely. And that judgment itself is a heuristic - there’s no formula for it. You develop a sense over time for which choices are load-bearing and which ones you can change later.

Where it breaks down

Satisficing works until it doesn’t, and the failure modes are worth understanding.

The biggest one is that your heuristics are trained on your past. When the situation is genuinely novel - a new domain, a new scale, a technology shift - the shortcuts that served you well can lead you confidently in the wrong direction. I’ve seen experienced engineers apply patterns from a previous system that were exactly wrong for the new one. The heuristic felt right because it had always been right before. That’s the danger of compression applied to the wrong context - you’re pattern-matching against a library that doesn’t contain the relevant pattern.

There’s also the problem of local search. When you satisfice, you tend to explore the neighborhood of what you already know. You find the nearest good-enough solution and stop looking. This is efficient most of the time, but it means you systematically miss solutions that require a bigger jump - the ones in a completely different part of the space. This is the local optima problem showing up in how we think, not just in optimization algorithms. Your heuristics are a search strategy, and like any local search strategy, they find the top of whatever hill you’re already on.
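The failure mode is the same one hill climbing has in optimization. A toy sketch (the two-peaked landscape is invented for illustration): greedy local search started on the small hill stops at its top and never finds the taller peak.

```python
def hill_climb(f, x, step=1, span=(-10, 10)):
    """Greedy local search: move to a better neighbor until none exists."""
    lo, hi = span
    while True:
        neighbors = [n for n in (x - step, x + step) if lo <= n <= hi]
        best = max(neighbors, key=f, default=x)
        if f(best) <= f(x):
            return x  # local optimum: no neighbor improves on where we are
        x = best

def landscape(x):
    """Two peaks: a small hill topping out at x=-5, a big one at x=8."""
    return max(5 - abs(x + 5), 12 - 2 * abs(x - 8))

print(hill_climb(landscape, x=-7))  # prints -5: the small hill's top, not the global peak at 8
```

Reaching the big hill requires a jump the local rule will never make - which is exactly what satisficing within a familiar neighborhood looks like.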

Kahneman and Tversky built on Simon’s work by cataloging the specific ways bounded rationality goes wrong - anchoring, availability bias, loss aversion. These aren’t random errors. They’re predictable consequences of the heuristics we use. Anchoring is what happens when you latch onto the first number you encounter and don’t adjust enough from there. Availability bias is what happens when ease of recall stands in for actual frequency. Loss aversion is what happens when “good enough to keep” beats “probably better but uncertain.” Each one is a heuristic doing exactly what it’s designed to do, in a situation it wasn’t designed for.

The case for feedback loops

If nobody has perfect information and nobody can optimize - if everyone is satisficing all the time - then you can’t rely on getting decisions right upfront. You need a way to correct course.

This is the argument for desired state systems and iterative approaches in general. You don’t need the optimal plan. You need a good-enough plan and a feedback loop. Declare where you want to go, take a step, observe the result, adjust. Each iteration is a satisficing decision - not optimal, but good enough to move closer to the goal. The loop compensates for the boundedness of each individual decision.
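The declare-observe-adjust loop can be sketched in a few lines. This is a hypothetical reconciler, loosely in the spirit of desired-state systems like Kubernetes controllers (the function names and the replica-scaling example are mine, not from any real API): each pass observes the actual state, compares it to the desired state, and applies one good-enough correction.

```python
def reconcile(get_desired, get_actual, apply_step, max_iters=50):
    """Converge toward a desired state one correction at a time.

    No pass needs a globally optimal plan - each just observes,
    diffs, and nudges the system closer.
    """
    for _ in range(max_iters):
        desired, actual = get_desired(), get_actual()
        if desired == actual:
            return actual  # converged
        apply_step(desired, actual)  # one bounded, good-enough step
    return get_actual()

# Illustrative use: scale replicas toward a target, one at a time.
state = {"replicas": 1}

def scale_one(desired, actual):
    state["replicas"] = actual + (1 if desired > actual else -1)

result = reconcile(
    get_desired=lambda: 5,
    get_actual=lambda: state["replicas"],
    apply_step=scale_one,
)
```

The individual steps are crude; the loop is what makes the system land where you declared it should.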

Waterfall assumes you can gather all the information upfront and make optimal decisions before writing code. Agile assumes you can’t - so it builds satisficing into the process. Ship something, learn from it, adjust. Bounded rationality is the reason agile works and detailed upfront planning usually doesn’t.

It applies to leadership the same way. You’ll never have complete information about your team, your market, or your strategy. Every decision is bounded. The leaders who accept that and build fast feedback loops outperform the ones who try to analyze their way to certainty before acting. The best decisions I’ve seen weren’t the most thoroughly analyzed ones - they were the ones made quickly enough to learn from the result and adjust.

Satisficing as a strategy

Simon wasn’t saying satisficing is a compromise or a failure. He was saying it’s the rational response to the actual conditions we operate in. Trying to optimize in a world of incomplete information isn’t just impossible - it’s counterproductive. You spend so much time analyzing that you miss the window to act.

This is what connects bounded rationality to everything else I keep thinking about. It’s why iteration beats planning - because each step generates the information you didn’t have when you started. It’s why general methods beat clever heuristics at scale - because heuristics are bounded compressions of the search space, and enough compute can search more of it. It’s why preparation matters more than prediction - because you can’t predict which specific opportunity will appear, but you can build the pattern library that lets you recognize it.

The practical version: be deliberate about what you optimize and what you satisfice. Optimize the things that are hard to reverse and high-impact. Satisfice on everything else, and trust the feedback loop to catch what you got wrong.