Game Theory
You’re negotiating a salary. You have a number in mind. They have a number in mind. Your best move depends on what you think their number is, which depends on what they think your number is, which depends on… and now you’re in a loop that game theory was invented to analyze.
Game theory is a framework for strategic decisions - situations where your outcome depends not just on what you do, but on what others do. It started as math (von Neumann and Morgenstern, 1944) but the real value for me has been as a mental model. It forces you to stop thinking about what you want and start thinking about what everyone else is going to do.
Nash equilibrium
The central concept is Nash equilibrium: a state where no player can improve their outcome by changing their strategy alone, given what everyone else is doing. It’s not the best possible outcome for anyone - it’s the stable one. The point where nobody has an incentive to deviate.
This distinction matters. Stability and optimality are different things. A Nash equilibrium can be terrible for everyone involved. The most famous example proves the point.
The classics
Prisoner’s Dilemma. Two suspects are interrogated separately. Each can cooperate (stay silent) or defect (testify against the other). If both cooperate, both get light sentences. If both defect, both get heavy sentences. If one defects and the other cooperates, the defector goes free and the cooperator gets the worst possible outcome.
The Nash equilibrium is both defect. And it’s easy to see why: regardless of what the other person does, defecting is better for you individually. If they cooperate, you go free by defecting instead of getting a light sentence. If they defect, you get a heavy sentence instead of the worst sentence. Defecting dominates. But when both players follow this logic, they end up worse off than if they’d both cooperated.
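The dominance argument can be checked mechanically. Here’s a minimal sketch with illustrative payoff numbers of my own choosing (higher is better), not values from any canonical source:

```python
# Illustrative Prisoner's Dilemma payoffs (assumed, not canonical).
# Moves: 0 = cooperate (stay silent), 1 = defect (testify).
# PAYOFFS[(my_move, their_move)] -> (my_payoff, their_payoff).
PAYOFFS = {
    (0, 0): (3, 3),  # both cooperate: light sentences
    (0, 1): (0, 5),  # you cooperate, they defect: worst outcome for you
    (1, 0): (5, 0),  # you defect, they cooperate: you go free
    (1, 1): (1, 1),  # both defect: heavy sentences
}

def best_response(their_move):
    """The move that maximizes your payoff against a fixed opponent move."""
    return max((0, 1), key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Defecting is the best response to either opponent move: it dominates.
assert best_response(0) == 1
assert best_response(1) == 1

# Yet mutual defection pays each player less than mutual cooperation.
assert PAYOFFS[(1, 1)][0] < PAYOFFS[(0, 0)][0]
```

Any payoff numbers preserving the same ordering (temptation > reward > punishment > sucker) produce the same result, which is why the structure, not the specific stakes, drives the outcome.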
This shows up everywhere once you start looking. Companies in a price war, cutting margins even though both would profit more at higher prices. Arms races where both sides spend billions on weapons neither side can use. Open-plan offices where everyone talks louder because everyone else is talking louder. The structure of the game produces a bad outcome even when every player is acting rationally.
The interesting question is how cooperation emerges despite this. Robert Axelrod’s tournaments in the 1980s showed that in repeated games, simple reciprocal strategies (like tit-for-tat: cooperate first, then mirror your opponent’s last move) can sustain cooperation. The shadow of future interaction changes the calculus. If you’ll play again tomorrow, defecting today has a cost - your opponent will punish you in the next round. This is why reputation matters, why long-term relationships enable cooperation that one-shot interactions can’t, and why small communities tend to be more cooperative than anonymous ones.
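A tiny repeated-game simulation, in the spirit of Axelrod’s tournaments, makes the shadow of the future concrete. Payoff numbers here are my own illustrative assumptions:

```python
# Repeated Prisoner's Dilemma sketch with assumed payoffs.
# Moves: 0 = cooperate, 1 = defect.
PAYOFF = {(0, 0): (3, 3), (0, 1): (0, 5), (1, 0): (5, 0), (1, 1): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then mirror the opponent's last move.
    return 0 if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 1

def play(strategy_a, strategy_b, rounds):
    """Run a repeated game and return (score_a, score_b)."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Two tit-for-tat players sustain cooperation for the whole game...
assert play(tit_for_tat, tit_for_tat, 10) == (30, 30)
# ...while a defector gets one free win, then punishment every round after.
assert play(tit_for_tat, always_defect, 10) == (9, 14)
```

The defector still comes out ahead in this pairing, but in a round-robin tournament the cooperative pairs rack up far more total points, which is roughly why reciprocal strategies did so well in Axelrod’s experiments.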
Chicken. Two cars drive toward each other. Swerve and you look weak. Don’t swerve and you win - unless the other driver doesn’t swerve either, in which case you’re both dead.
There’s no dominant strategy here. The best response depends entirely on what the other player does. If they swerve, you should hold steady. If they hold steady, you should swerve. This makes commitment the key strategic tool. If you can credibly convince the other driver that you won’t swerve - say, by visibly ripping out your steering wheel - you force them to swerve. Game theory calls this a commitment device, and it shows up in negotiations, international relations, and business strategy. A company that publicly announces a price and stakes its reputation on not budging is playing chicken.
The dangerous version: both sides try to commit simultaneously. Both rip out the steering wheel. Now nobody can swerve, and the game ends badly. Brinksmanship in nuclear strategy had exactly this structure, and it’s why Thomas Schelling’s game-theoretic analysis of conflict was worth a Nobel Prize.
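The absence of a dominant strategy in Chicken is easy to verify in code. Again the payoff numbers are illustrative assumptions, chosen only to preserve the ordering described above:

```python
# Illustrative Chicken payoffs (assumed). Moves: 0 = swerve, 1 = hold steady.
PAYOFF = {
    (0, 0): (0, 0),     # both swerve: mild embarrassment
    (0, 1): (-1, 1),    # you swerve, they don't: you look weak
    (1, 0): (1, -1),    # they swerve: you win
    (1, 1): (-10, -10), # neither swerves: crash
}

def best_response(their_move):
    return max((0, 1), key=lambda m: PAYOFF[(m, their_move)][0])

# Unlike the Prisoner's Dilemma, the best move flips with the opponent's:
assert best_response(0) == 1  # they swerve -> hold steady
assert best_response(1) == 0  # they hold steady -> swerve
```

That flip is exactly what makes commitment powerful: if you can make “hold steady” your only possible move, the other player’s best response is forced.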
Coordination games. Two friends want to meet for dinner but can’t communicate. Both prefer the same restaurant over eating alone, but there are two restaurants. Which one does each choose?
The interesting thing about coordination games is that there are multiple equilibria: both going to restaurant A is stable, and so is both going to restaurant B. The question isn’t what’s rational, it’s what’s focal. Schelling called these focal points (or Schelling points): the equilibrium people gravitate to based on shared context, convention, or salience. If one restaurant is more famous, or they went there last time, or it’s closer to both of them - that’s the focal point.
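You can enumerate the equilibria of the restaurant game directly from the Nash definition: an outcome is stable if neither player gains by deviating alone. Payoffs here are illustrative assumptions (meeting anywhere beats eating alone; A is slightly preferred):

```python
# 2x2 coordination game with assumed payoffs.
PAYOFF = {
    ('A', 'A'): (2, 2),  # meet at A (slightly preferred)
    ('A', 'B'): (0, 0),  # miscoordinate: eat alone
    ('B', 'A'): (0, 0),
    ('B', 'B'): (1, 1),  # meet at B
}
MOVES = ('A', 'B')

def is_nash(a, b):
    """Neither player can improve by deviating unilaterally."""
    ua, ub = PAYOFF[(a, b)]
    no_gain_a = all(PAYOFF[(alt, b)][0] <= ua for alt in MOVES)
    no_gain_b = all(PAYOFF[(a, alt)][1] <= ub for alt in MOVES)
    return no_gain_a and no_gain_b

equilibria = [(a, b) for a in MOVES for b in MOVES if is_nash(a, b)]
assert equilibria == [('A', 'A'), ('B', 'B')]  # two stable outcomes
```

The math finds two equilibria and is silent on which one happens. Salience - the focal point - is what picks between them, and that lives outside the payoff matrix.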
Standards work this way. QWERTY, TCP/IP, JavaScript - these won because everyone expected everyone else to use them. The technical merits mattered less than the coordination dynamics. Once a standard has critical mass, switching costs make it self-reinforcing. This is why inferior technologies can win and superior alternatives can fail. It’s a coordination game, not an optimization problem.
Where I’ve found it useful
Game theory changed how I think about incentives in systems design. When you’re building a platform, a protocol, or even an internal tool at a company, you’re designing a game. The rules you create determine the strategies players will adopt, and those strategies determine the outcomes.
Auction design is the clearest example. Vickrey showed that in a sealed-bid second-price auction (you pay the second-highest bid, not your own), the dominant strategy is to bid your true value. The mechanism design makes honesty optimal. Other auction formats create incentives for strategic underbidding or bluffing. Same auction, different rules, completely different behavior.
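A small sketch shows why truthful bidding is (weakly) dominant in a second-price auction. The bidder names and values are hypothetical, invented for illustration:

```python
# Sealed-bid second-price (Vickrey) auction sketch. Names/values assumed.
def second_price_auction(bids):
    """bids: {bidder: bid}. Highest bidder wins, pays the second-highest bid."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], bids[ranked[1]]  # (winner, price)

def utility(my_value, my_bid, other_bids):
    """My surplus: value minus price if I win, zero otherwise."""
    winner, price = second_price_auction(dict(other_bids, me=my_bid))
    return my_value - price if winner == 'me' else 0

others = {'alice': 60, 'bob': 80}

# Bidding my true value of 100 wins at Bob's 80, for a surplus of 20.
truthful = utility(100, 100, others)
assert truthful == 20

# Shading my bid below the competition loses an auction worth winning.
assert utility(100, 70, others) == 0

# Overbidding doesn't change my price - the second-highest bid sets it.
assert utility(100, 120, others) == truthful
```

The key property: your bid determines only whether you win, not what you pay, so there’s nothing to gain by misrepresenting your value. First-price auctions break this link, which is why they induce strategic underbidding.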
I see the same dynamics in engineering organizations. If you measure teams on individual output, you create a prisoner’s dilemma around collaboration - helping someone else costs you time you could spend on your own metrics. The rational move is to hoard knowledge, avoid code reviews, and focus on what’s counted. I’ve watched this happen. A team I was on switched from individual velocity metrics to team-level outcomes, and the behavior shift was immediate. People started pairing, sharing context, helping unblock each other. Same people, same codebase. Different game.
The connection to bounded rationality is direct. Game theory in textbooks assumes players are perfectly rational - they compute all possible strategies, anticipate all possible responses, and choose the mathematically optimal move. Real people don’t do this. We use heuristics, we anchor on salient information, we follow the crowd. Behavioral game theory (which bridges into behavioral finance) studies what happens when real humans, with all their cognitive shortcuts, play these games. The answer: equilibria shift, focal points matter more, and simple strategies often beat complex ones because complex strategies assume your opponent is also playing optimally.
That’s the practical takeaway for me. Game theory isn’t about computing optimal strategies. It’s about learning to ask: what game are we playing? What are the incentives? What will other players do? Sometimes the answer is to play better. Often the better move is to change the game.