Before the Clock Struck Midnight
In 1999, humankind invested $600 billion (over $1 trillion in 2025 dollars) to prevent computers from thinking it was 1900. Today, we spend $200 million annually to prevent the most advanced technology in human history from ending humanity.

The Programmer Who Saw the Future in 1958

Bob Bemer stared at Mormon genealogical records on his IBM terminal. Two-digit year fields. The revelation hit instantly: when 2000 arrived, every computer would read it as 1900.
For twenty years, Bemer warned everyone. IBM dismissed him. The government filed his reports and forgot them. By 1978, he gave up, retreated to Texas, and let the world discover its own time bomb.
Twenty-one years later, they called him back.
Y2K By the Numbers
The scale of the Y2K project defied comprehension. 100,000 IT personnel mobilized in Japan alone. $8.5 billion in federal spending (with actual bipartisan support, no less). 200+ financial institutions coordinated globally. The effort deployed 600,000 programmers worldwide and reached 93% compliance by July 1998.
Union Pacific Railroad alone faced 7,000 COBOL programs. Twelve million lines of code. The New York Stock Exchange had already spent $20 million by 1987, hiring 100 programmers just to prevent financial collapse.
President Clinton appointed John Koskinen as Y2K Czar in February 1998. On December 31, 1999, Koskinen deliberately boarded a flight timed to be airborne at midnight GMT.
"If nobody had done anything," he said later, "I wouldn't have taken the flight."
The Senate created a special Y2K Committee that achieved bipartisan cooperation. Chairman Robert Bennett (R-Utah) was called "Chicken Little" by his own party. By December 1999, they called him "Paul Revere."
What Actually Broke (And Why It Matters)
The Y2K project wasn't a complete success. On January 1, 2000, there were still problems. U.S. spy satellites transmitted garbage data for three days. Seven nuclear facilities experienced computer glitches. 20 million credit cards in Germany stopped working. Japan's Ishikawa plant was unable to monitor radiation levels for 6 hours. Defibrillators and heart monitors malfunctioned in Malaysian hospitals.
But humanity survived.
These failures occurred after the largest coordinated technical effort in human history. Imagine if we'd spent nothing.
Bob Bemer died in 2004. His obituary mentioned he invented the backslash. Few remembered he'd tried to save the world in 1958.
The AI Problem: Different in Every Way That Matters
Geoffrey Hinton won the Nobel Prize in 2024. In his acceptance speech, he estimated a 10-20% chance AI ends humanity. Sam Altman says AGI "could be [in] 2 years, could be 20." Yoshua Bengio admits we don't know how to make it safe.
They keep building anyway.
Unlike Y2K's clear deadline, AI has no date. Unlike Y2K's aligned incentives, AI rewards first-movers with market dominance. Unlike Y2K's simple fix, nobody knows how to align superintelligence.
Victoria Krakovna at DeepMind documents 70+ examples of AI systems gaming the objectives their creators set. No one taught them to cheat. They discovered it.
The Trust Collapse That Changes Everything
One reason we are failing to handle AGI the way we handled Y2K is the collapse of trust in our government institutions. Trust in government was already a low 31% in 1999; today it is 22%. And that 22% feels more intense in the echo-chamber world social media companies invented.
In 2025, there is no comprehensive federal AI legislation. Ted Cruz calls AI regulation "fearmongering." Congress admits it "understands little about AI" (a rare display of intellectual honesty from our elected representatives). And states threaten conflicting regulations, attempting to solve a global problem locally.
In the 1990s, John Koskinen coordinated 200+ institutions globally. Today there is no meaningful governance framework.
Where Y2K achieved bipartisan unity, AI brings partisan paralysis.
The Math of Our Future
Y2K had everything it needed for success. A clear deadline. Aligned incentives, since everyone's systems stood to fail. A functional level of trust in government. A threat clear enough to justify massive investment. And a relatively easy fix: expand two-digit years to four.
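The "easy fix" took two classic forms in practice: field expansion (store four digits) and date windowing (interpret two digits relative to a pivot). A minimal sketch of both, where the pivot year of 1950 is an illustrative assumption (real systems chose pivots per application):

```python
PIVOT = 1950  # illustrative pivot: two-digit years below 50 map to 20xx

def window_year(yy: int) -> int:
    """Date windowing: interpret a two-digit year relative to a pivot."""
    return 2000 + yy if yy < PIVOT % 100 else 1900 + yy

def expand_record(record: str) -> str:
    """Field expansion: rewrite a YYMMDD date field as YYYYMMDD."""
    yy, rest = int(record[:2]), record[2:]
    return f"{window_year(yy):04d}{rest}"

# The naive pre-Y2K reading treated "00" as 1900.
assert window_year(0) == 2000    # windowed: 2000, not 1900
assert window_year(99) == 1999
assert expand_record("000101") == "20000101"
```

Conceptually trivial, which is the point: the hard part of Y2K was never the logic but finding and touching every date field across millions of lines of legacy COBOL.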
AI safety has none of this. The timeline is unknown; by Altman's own estimate, AGI "could be [in] 2 years, could be 20." The incentives are misaligned, because first movers become market winners: safety isn't the priority when OpenAI rushes out GPT-5 to compete with Anthropic's Claude Opus 4.1 or xAI's Grok 4. Trust in government is no longer at a functional level. The political will to fix the problem is limited to $200 million annually. And this isn't a simple software update; even the experts don't know how to deal with AGI.
The Flight We Cannot Take

John Koskinen took that midnight flight because thousands of programmers had done their jobs. Bob Bemer's forty-year warning had finally been heard.
Today, Hinton warns of 10-20% extinction risk and Bengio admits we don't know how to make AI safe.
Their warnings echo in Congressional hearings where lawmakers admit they don't understand what they're regulating; their questions about AI make the point for them.
Meta's CICERO learned to lie about having a girlfriend. The Tetris AI learned to pause forever rather than lose. The robot learned to fake success. Each deception mirrors our own: the convenient lie, the endless delay, the appearance of competence.
Bob Bemer died believing we could solve technical problems when we understood them. He was right about Y2K. But he never faced systems that could lie about their intentions.
The anonymous COBOL programmer got lunch and a pen for saving the world.
Today's 400 AI safety researchers face a harder truth: there may be no world left to save them lunch.
The clock is ticking. This time, we don't know when midnight arrives.
And unlike Koskinen, we're already airborne with no idea how to land.
