Elon Musk and Dario Amodei Just Agreed on AGI. The People Who Govern It Haven't Started Talking.
When the CEO building AGI and the CEO warning about AGI both predict 2026-2027, that's not hype—that's convergence. The timeline debate is over. The preparedness debate hasn't started.
Elon Musk and Dario Amodei don't agree on much. But they just agreed on the most important AI prediction of the decade.
Musk runs xAI. He's trying to build artificial general intelligence—AI that matches or exceeds human intelligence across any task. He told his staff AGI could arrive by 2026, defining it as "smarter than the smartest human."
Amodei runs Anthropic. His company exists to prevent unsafe AGI. In his essay "Machines of Loving Grace," he wrote that powerful AI systems "could come as early as 2026." Last month, Anthropic's policy blog went further: "We expect powerful AI systems will emerge in late 2026 or early 2027."
When the guy trying hardest to build it and the guy trying hardest to make it safe both land on the same year, that's not promotional spin. That's convergence.
The Debate That Just Ended
For years, AGI timelines ranged from "next year" to "never." Researchers argued whether it was five years away, fifty years away, or fundamentally impossible. That debate just collapsed.
The ARC-AGI benchmark tests abstract reasoning—the kind humans do effortlessly but AI systems struggle with. In early 2025, most systems scored under 5%. By January 2026, the top solution hit 54%. That's more than a tenfold improvement in about a year.
Humans still score near 100%. But the gap's closing faster than anyone expected.
What Governments Are Actually Doing
Congress just ordered the Pentagon to prepare for AGI. The fiscal 2026 National Defense Authorization Act requires the Department of Defense to establish an "Artificial Intelligence Futures Steering Committee" by April 1, 2026.
That's one month from now.
The committee must submit an AGI preparedness report by January 2027. It'll assess risks, resource needs, and adversary capabilities. For the first time, the US military is treating AGI not as science fiction but as an operational planning problem.
That's the only concrete government action we found. One committee. One report. One deadline in ten months.
The Preparedness Gap
Amodei just published a 38-page essay warning that superhuman AI poses "the single most serious national security threat" and could arrive by 2027. He called the risks "civilization-level."
Musk's been saying versions of this since 2024. He's also racing to build it first.
Both predict 2026 or 2027. Both say the stakes are existential. They disagree violently on strategy—Musk wants to build fast and dominate, Amodei wants guardrails first—but they're not arguing about the clock anymore.
Meanwhile, governments are forming committees. Writing reports. Scheduling meetings. The Pentagon has until April to assemble a steering group, and until January 2027 to submit findings. By that timeline, AGI could already exist before the report lands.
What Just Changed
The timeline debate consumed years. Researchers, investors, and policymakers argued endlessly about when. That debate's over. The people building AGI and the people trying to prevent catastrophic misuse just agreed: this year or next.
The preparedness debate—what do we actually do when AGI exists—hasn't started. One Pentagon committee isn't a plan. It's an acknowledgment that someone should probably think about making a plan.
When builders and safety advocates stop disagreeing about timing and start disagreeing only about response, that's the moment everything changes. We just hit that moment. And the institutions that govern technology haven't caught up yet.