
Why Your Brain's Prediction Engine Fails in Novel Situations

4 min read / M. Linden

Your brain runs on predictions. Every millisecond, it's generating forecasts about what comes next: the weight of your coffee cup, the sound of your colleague's voice, the feeling of your feet hitting the pavement. This predictive processing works beautifully—until it doesn't.


When genuinely novel situations arise, our prediction engine stumbles. The reason? It has no reference model to work from.

The Prediction Machine's Dirty Secret

Neuroscientist Andy Clark describes the brain as a "prediction machine" that constantly generates models of reality. These models get updated when predictions fail, creating better forecasts for next time. But there's a catch: the system assumes the future will resemble some combination of past experiences.
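The update-on-error loop Clark describes can be sketched as simple error-driven learning: the model nudges its prediction toward each new observation by a fraction of the prediction error. This is a toy illustration, not Clark's formalism; the function name and learning rate are illustrative assumptions.

```python
# Toy sketch of error-driven prediction updating, loosely inspired by
# predictive-processing accounts. All names and numbers are illustrative.

def update_prediction(prediction: float, observation: float,
                      learning_rate: float = 0.3) -> float:
    """Nudge the prediction toward the observation by a fraction of the error."""
    error = observation - prediction           # prediction error ("surprise")
    return prediction + learning_rate * error  # update the internal model

# In a stable world (the coffee cup always weighs about the same),
# predictions converge quickly toward reality.
prediction = 0.0
for observation in [1.0, 1.0, 1.0, 1.0, 1.0]:
    prediction = update_prediction(prediction, observation)

print(round(prediction, 3))
```

The catch the article describes falls out of the same arithmetic: when the world changes abruptly, the model keeps blending in its old estimate and lags behind for many updates.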

Consider what happened during the early days of COVID-19. Epidemiologists had models for pandemic spread, but they struggled with behavioral variables that had never been tested at scale. Would people actually stay home? How would supply chains react to hoarding? The models existed, but the human behavioral components were largely untested hypotheses.

Our individual prediction engines faced the same problem. Your brain had no template for "global pandemic with rolling lockdowns." It kept trying to fit new information into old categories, often poorly.

When Pattern Recognition Becomes Pattern Imposition

The brain's pattern-matching system, usually an asset, becomes a liability in truly novel contexts. We see this clearly in how people initially responded to remote work challenges. Many tried to impose office-meeting patterns onto video calls, complete with the same duration and format. The technology was new; the context was new; but the mental model remained stubbornly old.

This isn't stupidity—it's how prediction works. The brain takes the closest available pattern and applies it, making adjustments as new data arrives. But those initial pattern choices create powerful anchoring effects that can persist long after they've proven inadequate.

```mermaid
graph TD
    A[Novel Situation] --> B{Brain Searches for Closest Pattern}
    B --> C[Applies Existing Model]
    C --> D[Receives Feedback]
    D --> E{Model Fits Reality?}
    E -->|Yes| F[Refine Model]
    E -->|No| G[Forced Adaptation]
    G --> H[Search for New Pattern]
    F --> C
    H --> B
```
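The pattern-matching step in that loop can be mimicked in a few lines: score each stored pattern by feature overlap with the new situation and apply the closest one, however poorly it fits. This is a deliberately simplified sketch; the patterns, features, and the video-call example are illustrative assumptions echoing the remote-work case above.

```python
# Simplified sketch of "search for closest pattern" from the diagram.
# All names, features, and data here are illustrative assumptions.

def closest_pattern(situation: dict, patterns: dict) -> str:
    """Pick the stored pattern sharing the most features with the situation."""
    return max(patterns,
               key=lambda name: len(situation.items() & patterns[name].items()))

patterns = {
    "office_meeting": {"medium": "in_person", "duration": 60, "format": "agenda"},
    "phone_call":     {"medium": "audio",     "duration": 15, "format": "ad_hoc"},
}

# A genuinely new situation: a video call. Neither pattern truly fits,
# but the system still applies whichever overlaps most.
video_call = {"medium": "video", "duration": 60, "format": "agenda"}
print(closest_pattern(video_call, patterns))  # prints "office_meeting"
```

Note what the sketch gets right about the failure mode: the function always returns *some* pattern, with no signal that the best available match is still a bad one.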

The Novelty Detection Problem

Here's where things get tricky: our brains are surprisingly bad at recognizing when a situation is genuinely novel versus just unusual. Cognitive scientist Douglas Hofstadter's work on analogy-making suggests that we consistently underestimate how different new situations really are from our past experience, because we grasp them in the first place by mapping them onto old ones.

Take the 2008 financial crisis. Many risk models failed not because they were poorly constructed, but because they assumed current conditions fell within the range of historical variation. The models could handle "unusual" market stress—they couldn't handle genuinely novel systemic risks that had no historical precedent.

The same pattern appears in smaller-scale decisions. That new job opportunity seems familiar because you've changed jobs before. But if it involves working in a completely different industry culture, your brain's predictions about "job transitions" may be dangerously incomplete.

Strategies for the Prediction-Poor Environment

When facing potential novelty, successful decision-makers employ several tactics:

Hypothesis Testing Over Confidence Building. Instead of trying to build a complete mental model upfront, they treat initial decisions as experiments designed to generate information.

Reference Class Expansion. They deliberately seek out analogies from distant domains. How did theater companies adapt to radio? What can military logistics teach us about supply chain resilience?

Prediction Intervals Over Point Estimates. Rather than asking "What will happen?", they ask "What's the range of possible outcomes, and how would we respond to each?"
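The third tactic can be made concrete with a toy bootstrap: instead of reporting one forecast, resample past outcomes and report the spread. The data, interval width, and variable names below are illustrative assumptions, not a recommendation for any particular forecasting method.

```python
# Toy sketch: report a range of plausible outcomes, not a single number.
# The historical data and the 90% interval choice are illustrative.
import random
import statistics

random.seed(0)

past_outcomes = [12, 15, 9, 14, 11, 13, 16, 10, 12, 14]  # made-up history

# Bootstrap: resample history many times and look at the spread of means.
boot_means = sorted(
    statistics.mean(random.choices(past_outcomes, k=len(past_outcomes)))
    for _ in range(10_000)
)
low = boot_means[int(0.05 * len(boot_means))]    # 5th percentile
high = boot_means[int(0.95 * len(boot_means))]   # 95th percentile

print(f"Point estimate: {statistics.mean(past_outcomes):.1f}")
print(f"90% interval:  {low:.1f} to {high:.1f}")
```

The interval invites the follow-up question the point estimate hides: what would we do if the outcome landed at the low end, and what if it landed at the high end?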

The goal isn't to eliminate prediction—that's impossible. Your brain will keep generating forecasts whether you want it to or not. Instead, the goal is developing better recognition of when those predictions are likely to be wrong, and building decision processes that account for that uncertainty.

When the map runs out, the compass matters more than the destination.
