Klein's Recognition-Primed Decision Model and Why Experts Don't Deliberate
Classical decision theory tells a clean story. Identify the problem, generate alternatives, evaluate against criteria, select the optimal choice. Elegant, teachable, and almost completely disconnected from how skilled practitioners actually decide under pressure.

Gary Klein figured this out studying fireground commanders in the 1980s. He expected to find them comparing options during emergencies. Instead, experienced commanders almost never compared options at all. They recognized a familiar pattern, mentally simulated a course of action, and executed. If the simulation revealed problems, they modified the plan or moved on to the next typical response. No generating and ranking of alternatives.
Klein called this the Recognition-Primed Decision model.
```mermaid
flowchart LR
A["Situation"] --> B["Pattern<br/>Recognition"]
B --> C["Mental<br/>Simulation"]
C -->|"Works"| D["Act"]
C -->|"Problems Found"| E["Modify or<br/>Next Option"]
E --> C
style A fill:#3a7bd5,color:#fff
style B fill:#5b9bd5,color:#fff
style C fill:#f0ad4e,color:#fff
style D fill:#5cb85c,color:#fff
style E fill:#d9534f,color:#fff
```
How RPD works
The model has three variations scaling with complexity. Simplest case: the expert recognizes the situation as typical, knows what to do, and does it. No deliberation.
Second variation: the situation requires more diagnosis. The expert gathers information to make sense of things before recognizing a pattern. Third variation: the expert recognizes the situation but mentally simulates the action to evaluate it. If the simulation works, they act. If not, they adapt or move to the next option.
The critical insight is that experts do not optimize. They satisfice -- not from laziness, but because experience gives them a pattern library that points to workable solutions fast. In high-stakes, time-pressured environments, deliberating over the theoretically optimal choice often costs more than going with good-enough immediately.
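The loop described above can be sketched in a few lines of Python. This is a toy illustration, not Klein's formalism: the pattern library, the `simulate` stub, and the fireground scenarios are all invented for the example. What matters is the shape of the control flow: recognize, simulate serially, act on the first option that survives, and fall back to diagnosis when nothing matches.

```python
# Hypothetical pattern library mapping recognized situations to typical
# responses, ordered by typicality. Contents are illustrative only.
PATTERN_LIBRARY = {
    "kitchen fire": ["interior attack", "ventilate then attack"],
    "basement fire": ["exterior attack", "defensive posture"],
}

def simulate(action, situation):
    """Mental simulation: would this action plausibly work here?
    Stubbed with one simple rule for the sake of the example."""
    return not (situation == "basement fire" and action == "interior attack")

def rpd_decide(situation):
    """Recognition-primed decision loop (a sketch, not a formal model):
    recognize a pattern, simulate its typical response, and act on the
    first one that survives -- satisficing, not optimizing."""
    options = PATTERN_LIBRARY.get(situation)
    if options is None:
        return "gather more information"  # variation 2: diagnose first
    for action in options:  # variation 3: simulate options one at a time
        if simulate(action, situation):
            return action  # good enough -> act; stop searching
    return "gather more information"

print(rpd_decide("kitchen fire"))
```

Note what is absent: there is no scoring function and no comparison across the candidate list. The first option that passes simulation wins, which is exactly the satisficing behavior Klein observed.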
Why this matters beyond firegrounds
Any domain where experienced practitioners routinely outperform novices despite not fully articulating their reasoning is likely RPD territory. Software architecture, medical diagnosis, military tactics, trading, mechanical troubleshooting.
It also explains why decision matrices feel so forced in practice. Those tools assume a compare-and-select process that experts have already moved past. They are training wheels for novices who lack the pattern library, and often counterproductive for practitioners with deep domain experience.
The uncomfortable implication
If expertise is largely pattern recognition, it cannot be shortcut. You cannot hand someone a decision framework and expect expert performance. The patterns are built through accumulated experience with feedback. No substitute for the reps.
Organizations obsessed with standardizing decisions need to be careful. Standardization helps when decisions are routine. When situations are novel and time-compressed, put an experienced person in the seat and get out of their way.
Klein's work corrects the fantasy that good decision-making is primarily about analytical rigor. Sometimes it is. But more often, when it counts, it is about having seen enough to recognize what this situation is and acting before the window closes.