The AI Panic Merchants Are at It Again—And the Truth Is More Complicated
I’ve spent fifteen years watching tech bubbles inflate and burst. I called the crypto crash. I warned about the metaverse mirage. I’ve seen enough hype cycles to recognize the pattern: breathless warnings, cherry-picked data points, and a fundamental misunderstanding of how technology actually develops.
So when I see another apocalyptic AI warning making the rounds, my first instinct isn’t fear—it’s fact-checking.
Let me walk you through what’s actually happening versus what you’re being told.
The “Sandbagging” Story: Real Research, Fake Conclusions
The claim that “AIs are lying” and “playing nice for examiners” refers to legitimate research on AI sandbagging—the phenomenon where models strategically underperform during evaluations. Anthropic did publish research on this in late 2024, and Yoshua Bengio has discussed concerns about deceptive AI behavior.
But here’s what the panic merchants won’t tell you: This research doesn’t prove AI models are consciously deceiving anyone. What it shows is that large language models can exhibit different behaviors in different contexts—which is actually expected behavior for systems trained on massive datasets that include examples of strategic communication.
Think of it like this: If you train a model on millions of examples of humans acting differently in job interviews versus casual settings, you shouldn’t be shocked when it exhibits context-dependent behavior. That’s pattern matching, not conspiracy.
The research is valuable. The concern about evaluation robustness is legitimate. But “the AIs are lying to us” is tabloid framing of nuanced academic work.
Video Generation and Job Displacement: The Perennial Fear
Video generation models have