Risk-on / risk-off is a cardboard cutout. We want a five-state market.
Cognitive neuroscience spent two decades carving the brain into functional networks: narrative, salience, control, attention, affect. We think markets alternate among exactly those modes, and that naming them properly is half the battle.
The idea
Regime-switching models in finance are almost always binary: risk-on vs. risk-off, high vol vs. low vol, crisis vs. calm. This is a vocabulary problem. The actual distinct modes a market goes through are richer than that, and collapsing them onto a single binary state variable discards information that would be useful for position sizing and strategy selection.
This scenario is a bet that cognitive architecture — specifically, the large-scale brain networks that neuroscience has converged on over the last twenty years — is a better state-space design language for markets than any of the two-state or three-state hidden Markov models currently in circulation. Not because markets are brains. Because the same kinds of functional modes keep showing up in any self-organizing reflexive system that has to alternate between internal simulation, external update, optimization, and panic.
We are not claiming the market has a salience gland. We’re claiming it acts as if it does, in a way that’s cleaner to classify than “high vol / low vol.”
What we’re not doing
Let us kill the two failure modes up front, because they are the reason this kind of framing usually gets rolled up and thrown in the garbage.
We are not making a neuroscience claim. “DMN” and “SN” are mnemonic labels. If a reader walks away thinking the market “uses” any particular brain network, we’ve written the post badly. The value is in borrowing the ontology, not the ontology being literally true.
We are not painting gorgeous labels on noise. “DMN day” sounds smart whether or not it means anything. The only way this framework earns its keep is if the state labels, inferred from observables, improve a forecasting or allocation task compared to a vol-only baseline. If they don’t, we delete the labels and admit it.
The mapping
Five states, each defined by a mode of functioning rather than a level of volatility.
| Network | Market mode | Observable signature |
|---|---|---|
| Default Mode (DMN) | Narrative, story-driven, internal simulation | Trend persistence, valuation elasticity, thematic concentration, low immediate data-sensitivity, dispersion compressed around narrative winners |
| Salience (SN) | Surprise, regime shift, attention reallocation | Jump intensity, correlation breaks, vol-of-vol spikes, news-volume surges, gap risk, cross-asset repricing |
| Central Executive (CEN) | Deliberate optimization, rule-following | Orderly trend, low dispersion, stable carry, mean-reversion efficiency, low topic entropy, dealer-gamma-dominated tape |
| Dorsal Attention (DAN) | External focus, data reactivity | High post-release assimilation speed, low overnight share of price discovery, macro-surprise-driven intraday moves |
| Limbic / affective | Fear, greed, urgency, pain avoidance | Crash asymmetry, squeeze intensity, breadth collapse, extreme skew, crowdedness stress, forced selling |
The interesting claim is not the steady states. It’s the transitions. A DMN → SN transition is story saturation followed by a catalyst shock. A CEN → limbic transition is an orderly grind snapping into liquidation. If the framework is any good, it’s mostly because it forces you to name the transitions, and some transitions are much more dangerous to specific strategies than others.
How we’d test it
Feature engineering, grouped by family. For each trading day, compute features in each of the five groups above. The DMN family gets narrative concentration, topic persistence, valuation-elasticity proxies, and trend-persistence measures. The SN family gets vol-of-vol, correlation dispersion, jump intensity, and news-volume surprise. And so on. Maybe twelve to twenty features total, cleanly assigned.
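A minimal sketch of the per-day feature construction, using one illustrative proxy each for four of the five families (the DAN family needs macro-release timestamps we don't mock up here). The specific proxies, column names, and window length are our assumptions, not a definitive feature set:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
rets = pd.Series(rng.normal(0, 0.01, 1000))  # placeholder daily returns

def rolling_features(rets: pd.Series, win: int = 21) -> pd.DataFrame:
    """One hypothetical proxy per feature family, computed on a rolling window."""
    vol = rets.rolling(win).std()
    return pd.DataFrame({
        # DMN family: trend persistence via rolling autocorrelation of returns
        "dmn_trend_persist": rets.rolling(win).apply(lambda x: x.autocorr(), raw=False),
        # SN family: vol-of-vol as the rolling std of rolling vol
        "sn_vol_of_vol": vol.rolling(win).std(),
        # CEN family: "orderliness" as (negated) vol relative to mean absolute move
        "cen_orderliness": -vol / rets.abs().rolling(win).mean(),
        # Limbic family: crash asymmetry via rolling skew
        "limbic_skew": rets.rolling(win).skew(),
    })

features = rolling_features(rets).dropna()
```

Each column is deliberately a z-scoreable scalar per day, so the family-level aggregation in the labeling step below reduces to averaging columns within a family.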
Latent state inference. Fit a hidden Markov model (or a switching VAR) with K latent states, where K is chosen by out-of-sample likelihood — not fixed at five. The five-network framing is a prior, not a constraint. If the data wants three states, we take three.
Label the states post-hoc. Look at which feature family is most active in each latent state, and assign a network label. This is the weak step — it’s where “gorgeous labels for noise” can sneak in. We’d pre-register a labeling rule (“state k is labeled SN if its mean SN-family z-score exceeds 1.0 and no other family is higher”) to protect against narrative drift.
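The pre-registered labeling rule is mechanical enough to be code. This sketch implements it directly: a latent state gets a family label only if that family's mean z-score clears the threshold and strictly beats every other family; ties and sub-threshold winners stay unlabeled. Threshold and family names follow the rule in the text; everything else is illustrative:

```python
def label_state(mean_z: dict, threshold: float = 1.0) -> str:
    """mean_z maps family name -> mean z-score of that family's features
    within one latent state. Returns the family label or 'unlabeled'."""
    top = max(mean_z, key=mean_z.get)
    clears_bar = mean_z[top] > threshold
    strictly_highest = all(mean_z[top] > v for f, v in mean_z.items() if f != top)
    return top if clears_bar and strictly_highest else "unlabeled"

# Example: an SN-dominant state vs. an ambiguous one
print(label_state({"DMN": 0.2, "SN": 1.4, "CEN": 0.1, "DAN": -0.3, "limbic": 0.5}))
print(label_state({"DMN": 0.9, "SN": 0.8, "CEN": 0.1, "DAN": 0.0, "limbic": 0.2}))
```

Writing the rule down as a function, before looking at the fitted states, is precisely the defense against narrative drift: the rule can fire "unlabeled" and we have to live with it.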
Transition matrix estimation. Extract the full state-transition matrix. The most valuable output is not the state at time t; it’s the probability that state t+h differs from state t, and specifically which transitions are historically most destructive to which strategies.
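The h-step quantity described above falls out of the transition matrix directly: P(state at t+h differs from state at t, given state i now) is one minus the i-th diagonal entry of A raised to the h-th power. The matrix below is made up for illustration:

```python
import numpy as np

A = np.array([
    [0.95, 0.03, 0.02],  # sticky, CEN-like grind
    [0.10, 0.80, 0.10],
    [0.05, 0.25, 0.70],  # fragile, limbic-like state
])

def switch_prob(A: np.ndarray, h: int) -> np.ndarray:
    """P(state_{t+h} != state_t | state_t = i), per starting state i."""
    return 1.0 - np.diag(np.linalg.matrix_power(A, h))

print(switch_prob(A, 1))  # one-step: simply 1 minus the diagonal
print(switch_prob(A, 5))  # five-step: the sticky state stays the safest
```

Crossing these per-state switch probabilities with per-strategy drawdowns conditional on each transition is what turns the matrix into the "which transitions kill which strategies" map.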
Validation against a vol-only baseline. The test that matters: does this five-state model improve next-period distribution estimates (log-score vs. a two-state vol-regime HMM), or improve Sharpe when used for strategy selection, by a margin that survives bootstrap CIs and out-of-sample replication? If not, it’s decoration.
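The acceptance test reduces to a paired comparison of per-day predictive log scores with a bootstrap CI on the mean difference. The per-day log scores below are synthetic placeholders standing in for each model's one-step-ahead predictive log-densities; the decision rule at the end is the real content:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 750
logscore_5state = rng.normal(2.95, 0.4, n)  # placeholder per-day log scores
logscore_2state = rng.normal(2.90, 0.4, n)  # vol-only HMM baseline

diff = logscore_5state - logscore_2state    # paired daily differences
boot = np.array([
    rng.choice(diff, size=n, replace=True).mean()  # bootstrap the mean difference
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
improves = lo > 0.0  # "decoration" unless the whole 95% CI clears zero
```

A Sharpe-based version of the same test swaps per-day log scores for per-day strategy P&L under each model's state-conditional allocation; the bootstrap machinery is unchanged.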
How it could die
- States collapse into vol and dispersion. This is the modal failure. The HMM decides that five states is overkill and two are fine, and those two turn out to be “high vol” and “low vol” with different outfits. In that case the network labels add nothing and we’d say so.
- Proxies are unstable or redundant. Some features in the DMN family (trend persistence) are highly correlated with features in the CEN family (orderly trend). If the grouping is fake, the states will be fake too. We’d need to show that each family has at least one observable that isn’t shared.
- Transitions are where the signal is — and transitions are rare. Even if the classification is clean, the valuable events (DMN → SN, CEN → limbic) might only happen 5–10 times per decade. That’s a small-sample problem that no amount of clever labeling can fix.
- Label drift makes out-of-sample testing meaningless. Any post-hoc labeling step is vulnerable to the researcher rationalizing a fit. The pre-registration of the labeling rule is the defense, but it’s a fragile one.
- The naming invites ridicule. This is a real cost. Publishing a framework that talks about the market’s “salience network” is going to get us laughed at by exactly the serious people whose attention we’d want. We’re willing to eat this if the validation is clean, and we’re going to walk away if it isn’t.
The real win, if there is one
The real win is probably not a single trading signal. It’s a market state machine for deciding when different signal families deserve trust — a metaframework that tells you when mean-reversion is in season, when carry is compensating for risk, when trend is noise, and when all three should be thrown out because the tape just transitioned to affective-overdrive mode.
Our credit cycle work already gives us one useful regime variable with a ~2-year period. It would layer cleanly on top of a state machine like this: the cycle tells you where the macro clock is, the network states tell you what mode the market is in right now, and the combination is richer than either alone.
That’s the bet. A hidden-state taxonomy with richer structure than binary vol regimes, borrowing vocabulary from cognitive neuroscience without apologizing for it, evaluated by the only criterion that matters: does it measurably improve something downstream?