
Silent Assumptions: How to Design Wargames That Force Players to Surface Hidden Doctrine

E. Sokolov
5 min read

Most wargames fail silently. Not with a dramatic wrong move, but with a quiet consensus β€” players converging on a course of action nobody has examined because nobody had to. The assumption goes unspoken. The game ends. The after-action review is polite. And the dangerous belief that drove every key decision walks out of the room untouched.


Call these silent assumptions: the load-bearing beliefs that players treat as background reality rather than as claims that could be wrong. They're not the same as doctrine (doctrine gets written down). Silent assumptions live one level below doctrine β€” in the intuitions players use to interpret doctrine, fill gaps in orders, and decide what's even worth wargaming in the first place.

The Sigma war games on Vietnam, run in 1964 by the Joint Chiefs of Staff's Joint War Games Agency, offer a useful autopsy. The series ran multiple iterations with senior policymakers and surfaced a great deal of operational complexity. What it couldn't shake loose was the underlying belief, shared by nearly every player across every cell, that North Vietnam's decision calculus resembled a rational-actor model responsive to escalation pressure. That assumption was never placed on the table as a variable. It was the table. And because it was never tested, the games kept producing outputs that reinforced it β€” a classic case of simulation validating its own premises.

So how do you build a game that actually hunts these things?

Step one: make the assumption visible before play begins.

Run a structured pre-game assumption elicitation β€” not a briefing, an interrogation. Ask each cell to write down, on index cards, the five things that would have to be true for their strategy to work. Then collect those cards, read them aloud to the full room, and ask: Does anyone believe any of these are false? You will get silence the first time. Run it anyway. By the third iteration of your game series, players start arriving with their own counter-cards.

This technique borrows loosely from Gary Klein's pre-mortem methodology, but runs the interrogation before play begins rather than in the after-action review, which is where it actually does damage.
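For designers who track the card exercise digitally, the elicitation round can be sketched as a small script: collect each cell's "must be true" cards, tally which beliefs recur across cells, and split them into live variables and baseline beliefs. Everything below (cell names, card texts) is a hypothetical illustration, not drawn from any real game.

```python
from collections import defaultdict

def elicit_assumptions(cards_by_cell):
    """Map each stated belief to the cells that hold it.
    Beliefs shared across cells but never challenged are the
    prime candidates for silent assumptions."""
    holders = defaultdict(list)
    for cell, cards in cards_by_cell.items():
        for card in cards:
            holders[card.lower().strip()].append(cell)
    return holders

def classify(holders, contested):
    """Split beliefs into live variables (someone challenged them)
    and baseline beliefs (nobody did)."""
    live = {b: c for b, c in holders.items() if b in contested}
    baseline = {b: c for b, c in holders.items() if b not in contested}
    return live, baseline

# Hypothetical example: two cells, one contested card
cards = {
    "blue": ["Ally will hold", "Sealift arrives by day 10"],
    "red": ["Ally will hold", "Blue escalates predictably"],
}
holders = elicit_assumptions(cards)
live, baseline = classify(holders, contested={"blue escalates predictably"})
```

Note that the interesting output is the *baseline* set: those are the beliefs the room agreed on without anyone having to defend them.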

Step two: assign at least one player the explicit role of assumption auditor.

This person does not command forces, advise leadership, or represent an adversary. Their job is to track every decision made during the game and flag the belief it required. They write these on a visible board β€” not to interrupt play, but to accumulate a record. By the end of the game, you have a map of what the room collectively believed without saying so.

Here's a simple flow for embedding this into your game design:

```mermaid
graph TD
    A[Pre-Game: Elicit Assumptions by Cell] --> B{Any Contested?}
    B -- Yes --> C[Flag as Live Variable]
    B -- No --> D[Mark as Baseline Belief]
    C --> E[Auditor Tracks in Play]
    D --> E
    E --> F[Post-Game: Map Decisions to Assumptions]
    F --> G((Identify Untested Load-Bearing Beliefs))
```
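One way to picture the auditor's board is as an append-only log mapping each decision to the belief it required; the post-game step then reduces to asking which beliefs carried multiple decisions without ever being probed. The class and field names below are illustrative, not from any fielded tool.

```python
from dataclasses import dataclass, field

@dataclass
class AuditEntry:
    turn: int
    decision: str
    required_belief: str
    tested: bool = False  # set True if an inject ever probed this belief

@dataclass
class AssumptionAudit:
    entries: list = field(default_factory=list)

    def log(self, turn, decision, belief):
        self.entries.append(AuditEntry(turn, decision, belief))

    def untested_load_bearing(self, min_decisions=2):
        """Beliefs that multiple decisions leaned on but no inject probed."""
        counts = {}
        for e in self.entries:
            if not e.tested:
                counts[e.required_belief] = counts.get(e.required_belief, 0) + 1
        return [b for b, n in counts.items() if n >= min_decisions]

# Hypothetical three-turn record
audit = AssumptionAudit()
audit.log(1, "forward-deploy to northern sector", "ally will hold")
audit.log(2, "route logistics through ally's ports", "ally will hold")
audit.log(3, "delay reserve mobilization", "adversary avoids escalation")
```

Here `untested_load_bearing()` would surface "ally will hold": two decisions rested on it and nothing in play ever tested it.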

Step three: stress-test the assumption directly β€” mid-game.

Once the auditor has flagged two or three load-bearing beliefs, inject a move designed to falsify one of them. Not to punish players, but to see what breaks. If your blue cell has been operating on the assumption that a particular ally will hold, inject a credible defection signal at the scenario's midpoint. Watch what happens to the decision calculus. Does the cell adapt? Or do they explain away the signal and proceed?

The explanation is the data. Write it down.
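The falsification inject can be recorded the same way: note which belief the signal targeted, whether the cell changed course, and, verbatim, the rationale they gave if they didn't. All names and strings here are hypothetical, a sketch of the record-keeping rather than a prescribed format.

```python
def record_inject_response(belief, signal, cell_response):
    """Record how a cell handled a signal designed to falsify a belief.
    'Explained away' responses are data, not noise: log them verbatim."""
    adapted = cell_response["changed_course"]
    return {
        "belief": belief,
        "signal": signal,
        "outcome": "adapted" if adapted else "explained away",
        "rationale": cell_response["stated_rationale"],  # write it down
    }

# Hypothetical mid-game inject: credible ally-defection signal
result = record_inject_response(
    belief="ally will hold",
    signal="intercepted back-channel talks with adversary",
    cell_response={
        "changed_course": False,
        "stated_rationale": "back-channel contact is routine hedging",
    },
)
```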

Some game designers resist this kind of injection because it feels artificial: "we'd never do this in a real exercise." That resistance is itself worth examining. Real adversaries probe assumptions constantly. An exercise that doesn't is modeling a world where adversaries are polite.

The 2018 Johns Hopkins Applied Physics Laboratory work on multi-domain operations wargaming ran into exactly this problem: cells operating under different service-branch doctrine held incompatible assumptions about command authority in contested environments. Neither set of assumptions was wrong on its own terms. They just couldn't coexist. No amount of scenario complexity would have surfaced that conflict β€” it took deliberate assumption comparison across cells during an after-action structured debrief.

Build the comparison into the game itself. Don't wait for the debrief.

One last point, and it's uncomfortable: the most dangerous silent assumptions in any wargame are usually held by the design team. What did you decide wasn't worth modeling? What adversary behavior did you round off as 'implausible'? Those choices are assumptions too β€” and unlike the players', they're invisible to everyone in the room except you.

Red-team your own scenario before you run it. Hand it to someone who wasn't in the design meetings and ask them what the game can't produce. Their answer is where you should start.
