
The Saturn signal didn't replicate.

Nobel laureates. 966 of them. Saturn in Taurus at 9.0% vs null base rate 8.6%. The hypothesis is dead — and the pipeline worked exactly as designed.

Two posts ago, we found that Saturn in Taurus is statistically overrepresented among TIME’s 100 Most Influential People. Yesterday morning, we showed it survived a permutation test and wasn’t fully explained by era artifacts. Today we tested it against 966 Nobel Prize laureates.

It’s gone.

Nobel laureates have Saturn in Taurus at 9.0%. The null base rate is 8.6%. Odds ratio: 1.05. Fisher’s exact p-value: 0.35. Permutation p-value: 0.36. The full battery of 401 tests produces zero results surviving correction. Twenty-three raw hits at p < 0.05, which is 5.7% — exactly what chance predicts.
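The 5.7% figure is just the raw-hit fraction, and it lands almost exactly on the false-positive rate that alpha = 0.05 predicts. A quick check, using only the numbers quoted above:

```python
# Sanity check on the exploratory battery: at alpha = 0.05, chance
# alone predicts roughly 5% false positives among 401 tests.
n_tests = 401
raw_hits = 23
expected_hits = n_tests * 0.05        # ~20 hits expected by chance
observed_rate = raw_hits / n_tests    # the 5.7% quoted above
print(f"{observed_rate:.1%} observed vs {expected_hits / n_tests:.1%} expected")
# prints "5.7% observed vs 5.0% expected"
```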

The signal that looked so striking in the TIME 100 — Bonferroni-corrected p of 0.021, permutation p under 0.0001, a within-window enrichment that survived era controls — is absent in an independent dataset nearly ten times larger.

                         TIME 100     Nobel laureates
Subjects                 100          966
Saturn-in-Taurus rate    19.0%        9.0%
Null base rate           6.9%         8.6%
Odds ratio               3.17         1.05
Fisher’s p               0.000054     0.35
Permutation p            < 0.0001     0.36

What happened

The TIME 100 signal was real in the narrow sense. Those 100 people genuinely do cluster in Saturn-in-Taurus birth windows at a rate that can’t be explained by random noise. The permutation test confirmed that. The within-window monthly analysis confirmed it wasn’t just an era-level artifact — subjects landed in the specific months when Saturn was in Taurus.

But “real in this dataset” is not the same as “generalizable.” The pattern was specific to the 100 people that TIME’s editors chose in 1999. A different selection of influential humans — chosen by a different committee, using different criteria, spanning a different mix of disciplines — shows nothing.

The most likely explanation is the one we flagged from the beginning: editorial selection interacting with Saturn’s orbital period. The TIME 100 has a particular aesthetic. It favors transformative figures from the early twentieth century — the modernizers and iconoclasts of the 1880s, the cultural revolutionaries of the 1940s. Eight of the nineteen Saturn-in-Taurus subjects come from a single two-year window: Ataturk, Fleming, Boeing, Picasso, Roosevelt, Joyce, Stravinsky, Goddard, all born 1881–1882. That’s not astrology. That’s an editorial preference for the Gilded Age cohort that happened to overlap with Saturn’s position.

The Nobel dataset doesn’t share this editorial shape. Nobel prizes are awarded annually across six categories, by separate committees, over more than a century. The birth years are more uniformly distributed. There’s no reason for laureates to cluster in any particular 2.5-year window — and they don’t.

What the methodology caught

Here’s what we think is actually interesting about this result, even though the hypothesis failed.

The pipeline worked. Every step did its job:

Flow diagram: univariate scan → Bonferroni + BH correction (survived) → permutation test (survived) → within-window era control (survived) → replication on 966 Nobel laureates (FAILED). Every gate passed except the one that matters most.

Phase 1 found a signal that survived stringent multiple-comparison correction across 393 tests. That’s not easy. The Bonferroni threshold requires a raw p-value under 0.000127 to survive, and Saturn in Taurus cleared it.

The permutation test confirmed the signal wasn’t a statistical artifact of the test structure. Zero out of 10,000 random shuffles reproduced the observed count. That’s a genuine departure from null, not a fluke of multiple testing.

The within-window analysis ruled out the simplest confounder — that subjects were just born in the right decades. They were born in the right months within those decades.

The replication test killed it anyway. A real, corrected, permutation-tested, era-controlled signal that doesn’t replicate in an independent dataset.

This is the entire point of the experimental design. The pipeline is supposed to generate hypotheses that look strong and then subject them to tests they can fail. If the replication step didn’t have teeth, the whole framework would be useless. Saturn in Taurus failing replication is the system working correctly.

What about the Nobel data itself

Nothing survives correction. The top hit is Uranus-Neptune square at raw p = 0.0005, but with a BH-corrected q of 0.19 — well above the 0.05 threshold. This is driven by slow outer-planet cycles and almost certainly an era artifact (Uranus-Neptune aspects last decades). Pluto in Leo, Uranus in Cancer, Neptune in Libra — the next several hits are all outer-planet placements that reflect generational birth-year clustering, not individual-level signal.

The Nobel dataset is cleaner than the TIME 100 in every way: larger, less editorially biased, better documented birth dates. If there were a generalizable astrological signal among influential humans, 966 laureates should be enough to detect it. The fact that nothing survives is informative.

What this means for the GA

The genetic algorithm was designed to search for signal in combinations of astrological features. The premise was that univariate tests might miss complex patterns — Jupiter in Taurus while Saturn squares Mercury while the Moon is in a fire sign. The GA would find these combinations if they existed.

But the GA needs a feature space with signal in it. The Nobel replication tells us the astrological feature space, at least as encoded here (sign placements, aspects, element counts), does not contain detectable signal for human achievement or influence. Running the GA on this feature space would be searching for patterns in noise. Any “discoveries” would be overfitting.

We could still build the GA as an engineering exercise — the island-model architecture on BEAM is interesting regardless of the data. But we won’t claim it’s testing a live hypothesis about astrology. The hypothesis has been tested. The answer is no.

What we got wrong and what we got right

Wrong: we let the TIME 100 result get more interesting than it deserved to be. The within-window analysis was methodologically sound, but we should have been more explicit that n = 26 (subjects in Saturn-Taurus windows) is too small to draw strong conclusions from, regardless of the p-value. Small samples produce dramatic-looking effects that evaporate at scale. That’s exactly what happened.

Wrong: we spent time on the monthly analysis before running the replication. The replication test was always the higher-priority next step. We got it backward because the monthly analysis was more interesting to write about.

Right: we pre-registered the hypothesis before testing it on new data. Saturn in Taurus was specified before we loaded a single Nobel laureate. The test was one-sided, pre-specified, and didn’t require correction. When it returned p = 0.35, there was no ambiguity.

Right: we built the pipeline so that replication was cheap. The same code that processed the TIME 100 processed the Nobel laureates. Different data, same methodology, same feature encoding. Comparing the two results takes one table.

Right: we’re publishing the null result. Failed replications are how science works. If we only published the Saturn-in-Taurus finding and quietly shelved the Nobel result, we’d be doing exactly the kind of selective reporting that makes astrology research unreliable.

Where this leaves JLMoney

The astrological enrichment experiment is complete. The answer is: no detectable signal for astrological features among influential humans, once you test on an independent dataset of sufficient size.

The macro synthesis engine — the genetic algorithm, the island model, the BEAM architecture — is still worth building. The engineering is sound regardless of whether the first dataset panned out. But the next dataset won’t be astrology. It’ll be something with a stronger prior: macroeconomic indicators, sentiment data, volatility regimes. Features where there’s a plausible mechanism, not just a coordinate system.

The methodology

Nobel data source. Nobel Prize API (api.nobelprize.org/2.1/laureates). 1,018 total entries. Excluded 28 organizations, 24 persons with incomplete birth dates (month or day unknown). Final dataset: 966 laureates with verified full birth dates, spanning 1817–1997.

Charts. Swiss Ephemeris via Kerykeion, noon UTC, same as TIME 100 pipeline. Zero computation errors across 966 laureates.

Null distribution. 19,320 stratified null dates (20x scale factor, seed 42). Stratified by month, era (6 bins: 1817–1859, 1860–1889, 1890–1909, 1910–1929, 1930–1949, 1950–1997), and region (North America, Europe, Rest of World). Era bins widened from the TIME 100 analysis to cover the broader Nobel birth-year range.
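A minimal sketch of how a stratified null like this can be drawn: every sampled date keeps the subject’s birth month and region and randomizes the year within the subject’s era bin. The era bins are the ones listed above, but the helper names and toy subject tuples are illustrative stand-ins, not the project’s actual code:

```python
import random

# Era bins from the methodology above; helpers and toy subjects are
# illustrative stand-ins for the real pipeline.
ERA_BINS = [(1817, 1859), (1860, 1889), (1890, 1909),
            (1910, 1929), (1930, 1949), (1950, 1997)]

def era_bin(year):
    return next((lo, hi) for lo, hi in ERA_BINS if lo <= year <= hi)

def null_dates(subjects, scale=20, seed=42):
    """For each (year, month, region) subject, draw `scale` null dates
    that keep the month and region and randomize the year within the
    subject's era bin."""
    rng = random.Random(seed)
    nulls = []
    for year, month, region in subjects:
        lo, hi = era_bin(year)
        for _ in range(scale):
            nulls.append((rng.randint(lo, hi), month, region))
    return nulls

toy = [(1881, 5, "Europe"), (1922, 11, "North America")]
print(len(null_dates(toy)))  # 2 subjects x 20 = 40 null dates
```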

Feature encoding. Identical to TIME 100: 10 sign placements, 45 aspect classifications, 7 element/modality counts. 401 total tests (vs 393 for TIME 100 — small difference due to different aspect pairs being active).

Pre-registered test. Saturn in Taurus, one-sided Fisher’s exact test, no correction needed (single pre-specified hypothesis). Result: p = 0.35.
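The pre-registered test is a one-sided Fisher’s exact test on a 2x2 table of Saturn-in-Taurus counts. A self-contained sketch, with the null-side counts reconstructed from the reported 8.6% base rate over 19,320 null dates (so the exact null counts are an assumption), lands in the reported ballpark:

```python
from math import exp, lgamma

def log_comb(n, k):
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def fisher_one_sided(a, b, c, d):
    """One-sided (greater) Fisher's exact p for [[a, b], [c, d]]:
    P(X >= a) under the hypergeometric null."""
    n_row, hits, total = a + b, a + c, a + b + c + d
    return sum(exp(log_comb(hits, k)
                   + log_comb(total - hits, n_row - k)
                   - log_comb(total, n_row))
               for k in range(a, min(n_row, hits) + 1))

# 87 of 966 laureates vs ~1,662 of 19,320 null dates (null count is
# an assumption reconstructed from the reported 8.6% base rate)
p = fisher_one_sided(87, 966 - 87, 1662, 19320 - 1662)
odds = (87 / 879) / (1662 / 17658)
print(f"odds ratio {odds:.2f}")  # odds ratio 1.05
print(f"one-sided p {p:.2f}")    # close to the reported 0.35
```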

Exploratory battery. 401 Fisher’s exact tests with Bonferroni and Benjamini-Hochberg correction. Zero results survive either correction.
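Both corrections are short enough to sketch. With m = 401, Bonferroni requires a raw p below 0.05 / 401 ≈ 0.000125, and a Benjamini-Hochberg step-up adjustment sends the battery’s top raw p of 0.0005 to roughly 0.0005 × 401 ≈ 0.2, consistent with the q of 0.19 reported above. The demo p-values below are placeholders, not the battery’s actual values:

```python
# Both corrections, sketched with placeholder p-values (the real
# battery's p-values come from the 401 Fisher tests).
def bonferroni_threshold(alpha, m):
    # a raw p must fall below alpha / m to survive
    return alpha / m

def bh_qvalues(p_values):
    """Benjamini-Hochberg step-up adjusted q-values."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    q = [0.0] * m
    running_min = 1.0
    for pos, i in enumerate(reversed(order)):   # largest p first
        rank = m - pos
        running_min = min(running_min, p_values[i] * m / rank)
        q[i] = running_min
    return q

print(bonferroni_threshold(0.05, 401))          # ~0.000125
print(bh_qvalues([0.0005, 0.02, 0.2, 0.8]))
```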

Permutation test. 10,000 label shuffles on Saturn in Taurus. Observed count (87) sits within the bulk of the permutation distribution (mean 83.4, std 8.5). Permutation p = 0.36.
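The label-shuffle test can be sketched in a few lines: pool the 966 laureates with the 19,320 null dates, repeatedly draw a random “subject” group of 966, and count how often it contains at least the observed 87 Saturn-in-Taurus charts. The null hit count (about 1,662) is reconstructed from the 8.6% base rate, and the demo uses 2,000 shuffles rather than the full 10,000 to keep the sketch fast:

```python
import random

def permutation_p(subject_hits, n_subjects, null_hits, n_null,
                  n_shuffles=10_000, seed=42):
    """Label-shuffle p: how often does a random group of n_subjects,
    drawn from the pooled subjects + null dates, contain at least
    subject_hits Saturn-in-Taurus charts?"""
    total = n_subjects + n_null
    pooled_hits = subject_hits + null_hits
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_shuffles):
        # indices below pooled_hits stand for the Saturn-in-Taurus charts
        count = sum(1 for i in rng.sample(range(total), n_subjects)
                    if i < pooled_hits)
        if count >= subject_hits:
            extreme += 1
    return extreme / n_shuffles

# ~1,662 null hits reconstructed from the 8.6% base rate; fewer
# shuffles than the full run, so the p wobbles slightly around 0.36
print(permutation_p(87, 966, 1662, 19320, n_shuffles=2_000))
```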
