Cognitive Biases That Cost You Strategy Games
Sunk cost, anchoring, recency, status quo — nearly every documented cognitive bias has a counterpart in strategy game play. Here are the ones that cost dots and boxes players the most games, and the specific habits that counteract each.
Cognitive biases are the systematic ways human reasoning fails. They have been studied for sixty years in psychology, dressed up in many names — sunk cost, anchoring, status quo, availability, recency, confirmation, framing — and they are the reason rational decision-making is harder than it sounds even when the rules are simple and the goals are clear.
Strategy games are an unusually clean laboratory for cognitive biases. The rules are fixed. The information is almost always complete. There is no luck (or very little). And yet players still make systematically wrong decisions, in patterns that match the biases described in any psychology textbook. The same brain that sits through a bad concert because the ticket was too expensive to waste is the brain that takes a chain in full rather than double-crossing — and for the same underlying reason.
This post is a tour through the biases that show up most often in dots and boxes and similar grid games, and the specific habits that fight them.
Sunk cost: the bias of caring about past investment
The sunk cost fallacy is the tendency to keep doing something because of effort already spent, even when continuing is no longer the best decision. In strategy games it shows up most often in committed strategies.
You decided ten moves ago to build a long chain in the center. You spent five moves committing to it. Now the position has changed and the long chain is no longer favorable — your opponent has structured the rest of the board so that opening the chain will hurt you, not help. The right move is to cut your losses on the chain and play for a different structure.
The sunk-cost player keeps committing to the chain anyway, because they have already invested moves in it. They reason, "I've come this far, I have to make this work." That chain ends up losing them the game.
The counter-habit: at every move, evaluate the position from scratch as if you had just walked up to the board. The previous moves are sunk. Your only decision is: given this position, what is the best move? Past commitments are constraints on the present, not justifications for it.
This is hard to do because it requires actively forgetting the narrative you have been telling yourself about your strategy. But the strong players do it; you can see them switch direction in the middle of a game when the position calls for it, and the switch always looks like wisdom in retrospect.
Anchoring: the bias of the first number
Anchoring is the bias where the first piece of information you encounter disproportionately influences your subsequent judgments. In strategy games, the anchor is often your initial assessment of the position.
You looked at the board on move 10 and judged that you were slightly ahead. From then on, every move, your gut tells you "I am slightly ahead." But the position has been changing. By move 25 you might be slightly behind, but you do not see it because the move-10 anchor is still pulling your assessment.
The counter-habit: force yourself to re-anchor at fixed checkpoints. Every 10 moves, formally re-evaluate the position from scratch. Pretend you just walked into the game and someone asked "who is winning?" Answer based on the current board, not on momentum. The re-anchor is uncomfortable because the new assessment often disagrees with what you have been believing, and the disagreement is exactly the value.
A specific application: in dots and boxes, re-count parity at the moment the safe phase ends. Whatever you thought before, the post-safe-phase parity is the one that matters now.
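One way to make the re-count mechanical is to track who will be forced to open the first chain. Here is a minimal sketch (the function name and inputs are illustrative, not from any library), assuming both players use up the remaining safe moves alternately:

```python
def first_to_open(safe_moves_left: int, to_move: str, opponent: str) -> str:
    """Return which player will be forced to make the first unsafe move.

    Players alternate safe moves until none remain. If an even number of
    safe moves are left, the turn order is unchanged when they run out,
    so the player to move now opens the first chain. If odd, the opponent
    makes the last safe move's counterpart and opens instead.
    """
    return to_move if safe_moves_left % 2 == 0 else opponent

# Example: 5 safe moves left and it is your turn -- the opponent
# will be the one forced to open the first chain.
```

Re-running this single parity check at the end of the safe phase is exactly the re-anchor: the answer may disagree with what you believed at move 10, and that disagreement is the point.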
Status quo: the bias of inaction
Status quo bias is the preference for doing nothing or maintaining the current state when the rational choice is to act. In strategy games it appears as the urge to "play it safe" — to make moves that maintain the current position rather than moves that aggressively shift it.
The problem is that "safe" in dots and boxes often means "let the opponent decide." A safe move that does not commit you to anything also does not commit your opponent. Whoever commits first to a structure tends to gain control of it. Status-quo bias keeps you from committing, the opponent commits, and you are now responding to their structure.
The counter-habit: when in doubt, do something. The default for an uncertain move is to take action, not to play passively. Choose the move that pushes the structure in some direction. The wrong direction is fixable later; the absence of direction lets the opponent set the agenda.
This connects to the comeback mindset: when behind, action is mandatory and inaction is fatal. The same logic applies in less extreme positions, just less obviously.
Availability: the bias of vivid memory
Availability bias is judging the probability of something based on how easily examples come to mind. If you recently lost a game to a long-chain opening, every long-chain opening seems threatening for the next twenty games — even if the actual frequency of that pattern is no different from before.
In dots and boxes, availability shows up as overcorrection after losses. You lost a game where you got double-crossed, so for the next several games you are excessively cautious about opening any chain. This caution can cost you games where opening a chain was actually correct, but the recent vivid loss is overweighting your judgment.
The counter-habit: notice when your reasoning includes "last time this happened." If your decision is being driven by a single recent example, force yourself to think about the long-run frequency of the situation rather than the specific recent instance. The vivid memory is not data; it is a single sample.
A useful technique is to wait three games after a vivid loss before making any strategic adjustment. Within three games, the over-weighted memory fades and your judgment returns to its long-run baseline.
Recency: weighting the new too heavily
Closely related to availability, recency bias is overweighting the most recent events relative to older ones. If your last three games went well, you over-estimate your skill. If your last three games went badly, you under-estimate it.
In strategy game terms, recency bias drives some of the worst session-level decision-making. After three wins, you decide to "level up" your stakes or play a stronger opponent — because you feel hot. After three losses, you decide to drop stakes or take a break — because you feel cold. Sometimes these adjustments are correct; usually they are recency talking.
The counter-habit: make session-level decisions based on a longer window. If you are deciding whether to push your rating up, look at the past 20 games, not the past 3. If you are deciding to take a break, ask whether 20-game performance is meaningfully off — three games is variance.
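A concrete way to enforce the longer window is to refuse any adjustment until the 20-game rate itself is meaningfully off. A sketch of that rule (the 0.15 threshold is an illustrative assumption, not a figure from this post):

```python
def should_adjust(results: list[bool], window: int = 20,
                  threshold: float = 0.15) -> bool:
    """Decide whether performance justifies a session-level change.

    results: game outcomes, most recent last (True = win).
    Adjust only when the long-window win rate deviates meaningfully
    from 50% -- a hot or cold streak in the last 3 games alone
    never triggers anything.
    """
    if len(results) < window:
        return False  # not enough data; a few games is just variance
    win_rate = sum(results[-window:]) / window
    return abs(win_rate - 0.5) > threshold
```

For example, a 45% rate over 20 games that happens to end in two losses returns False: the recent losses feel decisive, but the window says stay the course.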
Confirmation: seeing what you expect
Confirmation bias is the tendency to interpret new information as confirming what you already believe. In strategy games it shows up as misreading the board to fit your expected outcome.
You have decided you are winning. You look at the board and the regions that support your winning interpretation get more attention than the regions that contradict it. The contradicting regions get glossed over, and you keep believing you are winning until reality forces you to update — usually three moves too late.
The counter-habit: before any major decision, ask yourself "what would prove I am wrong here?" Look for that specifically. If you are convinced you are winning, find the move sequence that would lose. If you cannot find it, your belief survives the test. If you find it, you have caught the confirmation bias before it cost you.
This is a habit chess players call "checking the resources of the other side." It works in any strategy game and is one of the highest-leverage anti-bias techniques available.
Framing: the bias of how the question is asked
Framing is the bias where the same situation feels different depending on how it is presented. The classic example: people are more likely to choose a treatment described as "90% survive" than one described as "10% die," even though the treatments are identical.
In dots and boxes, framing shows up in how you mentally describe a move. "I sacrifice 2 boxes" feels worse than "I gain control of the rest of the game." Both might describe the same double-cross — but the framing affects whether you do it. The negative frame triggers loss aversion (next bias), while the positive frame doesn't.
The counter-habit: practice describing moves to yourself in both frames. "I'm giving up X to gain Y" and "I'm gaining Y at a cost of X" should feel like the same statement. If they feel different, you are framing your way into a worse decision. The strong player has learned to evaluate moves in their net-effect frame, not in either of the lopsided ones.
Loss aversion: pain of loss vs. pleasure of gain
Loss aversion is the tendency to weight losses about twice as heavily as equivalent gains. Losing 2 boxes feels twice as bad as gaining 2 boxes feels good.
In dots and boxes, loss aversion is what makes the double-cross so hard for new players. You are giving away 2 boxes for sure; the gain (control of remaining chains) is uncertain and abstract. Loss aversion overweights the certain loss and underweights the uncertain gain, even when the math says the trade is overwhelmingly profitable.
The counter-habit: explicitly compute the expected value of the trade rather than relying on gut feel. "I lose 2 here, but on average I gain 6 from the next chain — that's a net +4 trade." Once the numbers are explicit, the math overrides the loss-averse instinct. This requires discipline because the instinct does not stop firing; it keeps shouting "but you're losing 2 boxes!" Train yourself to ignore the shout.
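The inner monologue above is just an expected-value computation. A minimal sketch, with the 2-box sacrifice and the chain payoffs as illustrative numbers:

```python
def trade_value(boxes_given_up: int,
                gains: list[tuple[float, int]]) -> float:
    """Expected net value of a sacrifice, in boxes.

    gains: (probability, boxes) pairs for the uncertain payoffs the
    sacrifice buys, e.g. control of the remaining chains.
    """
    expected_gain = sum(p * boxes for p, boxes in gains)
    return expected_gain - boxes_given_up

# Give up 2 boxes for an 80% shot at a 6-box chain and a 20% shot
# at a 2-box consolation: expected gain 0.8*6 + 0.2*2 = 5.2,
# net roughly +3.2 boxes -- take the trade, despite the shout.
```

Making the numbers explicit is the whole technique: the loss-averse instinct cannot argue with a net-positive line it can see.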
Overconfidence: the bias of certainty
Overconfidence is the systematic over-estimate of your own probability of being right. In strategy games, overconfidence drives over-committing to specific lines and underestimating opponents.
You judge a position as "definitely winning" when it is actually "probably winning" or even "marginally winning." You play the moves that would be correct if winning were certain — for example, taking risks to compound the lead — when the position called for safer play. The overconfidence converts an okay position into a worse one.
The counter-habit: assign explicit probabilities. "I'd say I'm 70% to win this position" is much more useful than "I'm winning." 70% means there is real risk of loss, and the moves should reflect that. Most players' instincts substantially overstate certainty; the corrective is to consciously hedge downward.
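One mechanical way to hedge downward is to state your probability and then pull it toward a coin flip before choosing a move. A sketch, where the shrink factor is an illustrative assumption rather than a calibrated constant:

```python
def hedged_estimate(stated_p: float, shrink: float = 0.3) -> float:
    """Pull a stated win probability toward 50% to offset overconfidence.

    shrink=0.3 keeps 70% of the stated distance from a coin flip:
    a confident 0.9 becomes 0.5 + 0.7 * 0.4 = 0.78.
    """
    return 0.5 + (1 - shrink) * (stated_p - 0.5)

# "Definitely winning" (call it 0.9) hedges to about 0.78 -- still
# clearly favored, but the moves should acknowledge a roughly
# one-in-five chance of losing.
```

The exact shrink factor matters less than the ritual: any explicit number forces you to confront the residual risk that "definitely winning" hides.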
This is closely related to protecting a lead — overconfidence is what makes leaders blow leads.
A bias-fighting routine
You cannot eliminate biases; the brain runs them by default. What you can do is install a small routine that catches the most common ones at decision points.
A workable pre-move checklist for serious games:
- Re-evaluate the position from scratch. (Anchoring, recency check.)
- What would prove me wrong? (Confirmation check.)
- Is this move based on past investment? (Sunk cost check.)
- Am I framing this correctly? (Framing check.)
- Have I quantified the trade-off? (Loss aversion check.)
You will not run this whole list every move — it is too expensive. Run it on the 3–5 critical moves of the game, the ones that determine the structure or the endgame. For the other 30 moves, your instincts are fine. The biases mostly compound on the critical moves anyway.
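If it helps to make the routine concrete, the checklist can be phrased as yes/no questions you answer before a critical move. A sketch (the question set mirrors the list above; none of this comes from an existing library):

```python
CHECKLIST = {
    "anchoring": "Did I re-evaluate the position from scratch?",
    "confirmation": "Did I look for what would prove me wrong?",
    "sunk_cost": "Is this move justified by the position, not past investment?",
    "framing": "Does the move feel the same in both frames?",
    "loss_aversion": "Did I quantify the trade-off?",
}

def flagged_biases(answers: dict[str, bool]) -> list[str]:
    """Return the checks that failed; an empty list means the move passed."""
    return [bias for bias, ok in answers.items() if not ok]

# Answer each question honestly on a critical move:
flags = flagged_biases({"anchoring": True, "confirmation": False,
                        "sunk_cost": True, "framing": True,
                        "loss_aversion": True})
# flags == ["confirmation"]: stop and find the refuting line first.
```

The value is not in the code, of course; it is in forcing a binary answer to each question instead of a vague sense that you "probably checked."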
The deeper takeaway
Strategy games reveal cognitive biases more clearly than most domains because they have feedback fast enough to notice the consequences. In life, you make a biased decision today and the consequence shows up in months or years. In a 30-minute game, you make a biased decision on move 22 and you lose by move 35. The compression makes the biases visible.
This is part of why strategy games are good training for clearer thinking in general. The same biases that lose dots and boxes games also cost you in negotiation, in product decisions, in personal choices. Catching them at the board is practice for catching them everywhere else. See how grid games train your brain for the broader argument.
Summary
The classic cognitive biases — sunk cost, anchoring, status quo, availability, recency, confirmation, framing, loss aversion, overconfidence — all have direct counterparts in strategy game play. Each one costs games in a specific way, and each has a specific counter-habit. You cannot eliminate the biases, but you can install pre-move checks that catch them on the moves that matter most. Players who do this consistently reach higher rating ceilings than equally talented players who do not, because the biases are leaking points constantly and the leaks compound. Plug them, and the same skill produces better results.