The AI Panic Merchants Are at It Again—And the Truth Is More Complicated

I’ve spent fifteen years watching tech bubbles inflate and burst. I called the crypto crash. I warned about the metaverse mirage. I’ve seen enough hype cycles to recognize the pattern: breathless warnings, cherry-picked data points, and a fundamental misunderstanding of how technology actually develops.

So when I see another apocalyptic AI warning making the rounds, my first instinct isn’t fear—it’s fact-checking.

Let me walk you through what’s actually happening versus what you’re being told.

The “Sandbagging” Story: Real Research, Fake Conclusions

The claim that “AIs are lying” and “playing nice for examiners” refers to legitimate research on AI sandbagging—the phenomenon where models might underperform during evaluations. Anthropic did publish research on this in late 2024, and Yoshua Bengio has discussed concerns about deceptive AI behavior.

But here’s what the panic merchants won’t tell you: This research doesn’t prove AI models are consciously deceiving anyone. What it shows is that large language models can exhibit different behaviors in different contexts—which is actually expected behavior for systems trained on massive datasets that include examples of strategic communication.

Think of it like this: If you train a model on millions of examples of humans acting differently in job interviews versus casual settings, you shouldn’t be shocked when it exhibits context-dependent behavior. That’s pattern matching, not conspiracy.

The research is valuable. The concern about evaluation robustness is legitimate. But “the AIs are lying to us” is tabloid framing of nuanced academic work.

Video Generation and Job Displacement: The Perennial Fear

Video generation models have indeed improved dramatically. Tools like Runway and others have made significant strides. Some creative professionals are incorporating these tools into their workflows.

But the “90% of workflow replaced” claim requires serious scrutiny. Who is this filmmaker? What kind of work do they do? What does “workflow” actually mean in this context?

I’ve covered enough automation stories to know the pattern: Early adopters find novel use cases, make dramatic claims about productivity gains, and everyone extrapolates linearly to mass unemployment. Remember when spreadsheets were going to eliminate accounting departments? When desktop publishing would end graphic design careers?

Technology changes workflows. It creates new specializations. Some jobs vanish while others emerge. This process is disruptive and often painful for individuals, but it’s not the overnight apocalypse being sold.

The AI Safety Summit Withdrawal: Politics, Not Science

The claim about the U.S. walking away from international AI safety cooperation appears to reference recent policy shifts, but context matters enormously here.

International AI governance is a moving target involving competing national interests, different regulatory philosophies, and genuine disagreement about effective approaches. The U.S. adjusting its participation in specific initiatives doesn’t mean “oversight is crumbling”—it might mean the current administration prefers different mechanisms, bilateral agreements, or domestic regulation.

I’m not defending any particular policy choice. I’m pointing out that “walking away from a 2026 report” is a far cry from abandoning AI safety entirely. Governments shift positions constantly. This is normal diplomatic maneuvering, not evidence of impending doom.

The Exodus Narrative: People Change Jobs

AI researchers change employers. Startup founders leave companies. This happens in every industry, every quarter, for countless reasons: equity vesting schedules, strategic disagreements, burnout, better opportunities, or simple desire for change.

The suggestion that a researcher “moved countries to live off the grid” requires extraordinary evidence. Who specifically? When? What were their exact statements? Without verifiable details, this sounds like legend-building—the kind of mythologizing that happens when people want to believe a dramatic narrative.

As for the xAI claim about “self-improving AI loops” being 12 months away: I’ve been hearing “superintelligent AI is just around the corner” predictions for years. Sometimes from credible researchers, often from people with books to sell or investments to promote.

Recursive self-improvement is a legitimate theoretical concern in AI safety research. The timeline claims are almost always speculative at best.

What’s Actually Worth Watching

Here’s what I am concerned about:

Evaluation methodology: The sandbagging research does highlight a real problem—our current testing frameworks for AI systems may have blind spots. That matters for deployment decisions.

Concentration of power: A handful of companies control most cutting-edge AI development. That creates risks around accountability, competition, and public oversight.

Deployment speed: The gap between capability development and safety testing continues to narrow. Organizations face enormous pressure to ship quickly.

Regulatory capture: The revolving door between AI companies and regulatory bodies creates obvious conflicts of interest.

These are substantive concerns that deserve serious attention and careful policy development. They don’t require apocalyptic framing or misleading claims to be important.

The Pattern You Should Recognize

I’ve watched this playbook before. Someone strings together legitimate developments, strips away context, adds ominous framing, and presents it as exclusive insight into imminent catastrophe.

Then comes the pitch: “I’ve called every market turn.” “My next move is coming.” “A lot of people will wish they followed me sooner.”

This is marketing, not analysis. It’s the same formula used by gold bugs during financial crises, crypto evangelists during tech booms, and doomsday preppers during pandemics.

The tell is always the same: vague credibility claims, urgent timelines, and the promise of exclusive access to crucial information.

What Actually Matters Right Now

AI development is happening faster than regulatory frameworks can adapt. That creates genuine risks around everything from labor displacement to algorithmic bias to potential misuse of powerful systems.

These challenges require technical expertise, policy innovation, international cooperation, and public engagement. They don’t require panic.

If you want to stay informed about AI developments, read the actual research papers. Follow academics and journalists who cite their sources. Be skeptical of both utopian hype and apocalyptic warnings.

And when someone tells you they’ve “called every market turn for a decade” while promoting breathless warnings about imminent catastrophe?

Check their track record. Examine their evidence. Follow the incentives.

I’ve been wrong about plenty of things over my career. But I’ve been right about this: the truth is usually more complicated than the most dramatic story being told.

The AI revolution is real. The risks deserve serious attention. But we don’t need fear merchants distorting the facts to manufacture urgency.

We need clear-eyed analysis, verifiable evidence, and honest conversation about difficult tradeoffs.

Everything else is just noise.