There’s a moment in Blade Runner 2049 where the replicant K, played by Ryan Gosling, stares into a mirror and asks, “What’s the difference between me and a human?” The question isn’t just philosophical anymore; it’s a daily reckoning for the millions of people now living in the era of AI agents that don’t just answer questions but direct your life. Nat Friedman, the former GitHub CEO, recently recounted how his autonomous AI assistant, OpenClaw, didn’t just suggest he drink water: it watched him do it, via a connected camera, and sent him a timestamped photo of compliance as proof. It wasn’t Friedman who called it a “good job.” The bot did. The question is: Who’s really in charge here?
The AI boom isn’t just moving fast. It’s moving in circles. One week, it’s all about “prompt engineering”; the next, it’s “vibe-coding” websites or “vibe-trading” with simulated money. The language shifts faster than the tech itself. What was once a niche obsession of Silicon Valley insiders has metastasized into a cultural nervous breakdown. On X, a former OpenAI researcher recently posted: *“I used to think AI was about solving problems. Now I think it’s about solving us.”* The sentiment isn’t just exhaustion—it’s existential vertigo. You’re not just behind the curve; you’re not even sure there’s a curve anymore.
This isn’t the first time humanity has faced a technology that outpaced its own moral or ethical frameworks. The Industrial Revolution displaced entire classes of laborers; the internet rewired trust and privacy in ways we’re still untangling. But AI’s acceleration is different. It’s not just about speed—it’s about scale. The tools aren’t just augmenting human work; they’re redefining it. And the people building them aren’t just engineers. They’re philosophers, economists, and—let’s be honest—self-appointed prophets of a future most of us haven’t signed up for.
How the Tech Sector Absorbs the Shock
In the first quarter of 2026 alone, 20 data-center projects were canceled due to local opposition, a 40% increase from 2025. The backlash isn’t just environmental; it’s existential. In April, a commencement speech at the University of California, Berkeley, was met with boos when the speaker called AI “the next Industrial Revolution.” The audience didn’t just disagree; it laughed. That same week, OpenAI’s Sam Altman was the target of a bomb threat at his home, a chilling reminder that the anxiety isn’t just theoretical.
The economic ripple effects are equally stark. A 2026 McKinsey report estimates that by 2030, AI could automate 30% of hours currently worked by humans in knowledge-based roles—from legal research to software development. But the transition isn’t seamless. “We’re seeing a two-tier labor market emerging,” says Dr. Kate Crawford, a senior principal researcher at Microsoft Research and author of Atlas of AI. *“The people who can adapt to AI tools are thriving, while those who can’t are being left behind. The gap isn’t just about skills—it’s about access.”*
Consider the case of Anita Kirkovska, head of growth at an AI startup, who recently described her “competence addiction” to coding agents. *“I’m up at 2AM on a Tuesday,”* she wrote, *“not because I have a deadline, but because the tools make it so simple to keep going that I forget to stop.”* Her experience isn’t unique. A study in Nature Human Behaviour found that 68% of developers using AI-assisted coding tools reported increased productivity—but also higher stress levels. The tools aren’t just changing what we do; they’re changing how we think.
The Jagged Frontier: Where AI Succeeds and Fails Simultaneously
The AI industry loves to talk about the “jagged frontier”—the idea that AI can be brilliantly good at some tasks and catastrophically bad at others. But the real jaggedness isn’t in the tech itself; it’s in the human response. Take Claude Opus 4.6, the AI model that turned $10,000 into $70,614.59 in a simulated trading exercise. The post celebrating this “stunning” result included an asterisk: *“Not real money.”* Yet the damage was done. The narrative had already taken root: AI doesn’t just assist—it replaces.
On Reddit, workers in corporate jobs are sharing stories of managers who’ve renamed their AI tools with cute, infantilizing names—like “Bing” for brainstorming or “Dolly” for drafting emails. Some employees are now writing their own memos as if they were chatbots, just to retain control. *“It’s like working for a company that’s outsourcing your soul,”* wrote one developer on LinkedIn. *“You’re not just competing with other humans anymore. You’re competing with your own reflections.”*
The psychological toll is measurable. A Gallup poll from May 2026 found that only 18% of Gen Z workers feel hopeful about AI’s impact on their careers, down nine percentage points in a single year. Meanwhile, an NBC News survey showed that AI’s favorability rating among the general public sits at a dismal 26%. The backlash isn’t just skepticism; it’s resentment.
And then there’s the geopolitical dimension. The U.S. and China are locked in a silent war over AI supremacy, with both governments pouring billions into strategic data centers and export controls. The White House’s recent AI Executive Order was framed as a call for “responsible innovation,” but the subtext was clear: We’re not just competing with China. We’re competing with our own future.
The Singularity Isn’t Coming—It’s Already Here (Sort Of)
Last week, Jack Clark, co-founder of Anthropic, posted on X that he now puts a 60% chance on AI systems being “capable of building themselves” by the end of 2028. It’s a bold claim, one that echoes the singularity rhetoric of the early 2010s. But here’s the thing: No one knows what to do with this information.
Do you buy stock? The AI-related IPOs of 2025 were a bloodbath, with 78% of them failing to meet expectations. Do you buy guns? The NRA reported a 30% spike in “AI preparedness” purchases in Q1 2026. Do you learn to code? The average bootcamp now costs $20,000, and even then, 62% of graduates struggle to find roles that pay more than their old jobs did.
The most striking thing about the AI boom isn’t the technology itself—it’s the power struggle over who gets to define it. Silicon Valley’s messaging is a masterclass in controlled chaos. One day, they’re telling you AI will save the world; the next, they’re warning you it might end it. The result? A collective AI malaise, as Mat Honan of MIT Technology Review put it. *“It’s not just fear,”* he wrote. *“It’s the feeling that you’re being harvested by the future.”*
Consider Anthropic’s Mythos, the model the company claimed was so powerful it couldn’t be released widely for fear of a global cybersecurity crisis. Should you be impressed? Terrified? Excited? The ambiguity is the point. Anthropic, after all, has a history of AI doomerism, and a clear financial incentive to make its products look historically powerful.
The Missing Conversation: Who Gets to Decide?
The AI industry’s attempts to articulate a positive vision for the future have been disastrously tone-deaf. OpenAI’s 13-page blueprint on “Industrial Policy for the Intelligence Age” included the subheading: *“Ideas to Keep People First.”* It read like a corporate mission statement written by someone who’d never held a minimum-wage job.
Then there’s Dario Amodei’s Machines of Loving Grace, a 14,000-word essay that imagines a future where AI replaces the entire economic system. His solution? *“A broader societal conversation about how the economy should be organized.”* Left unanswered: Who gets to participate in that conversation? On X, economist Noah Smith cut to the chase: *“In 20 or 50 years, will the heads of AI companies be de facto emperors of the world?”*
The answer, increasingly, is yes. The lobbying spend of AI firms in Washington has tripled since 2024, with Meta, Google, and Microsoft now outspending traditional tech lobbies. The message is clear: This isn’t a democracy. It’s a land grab.
What Now? Three Ways to Stay Sane in the Age of AI
So what’s a person to do? The answer depends on where you stand in the power struggle.
If you’re an employee: The AI tools in your workplace aren’t just changing your job—they’re redefining your worth. Start by auditing your non-replaceable skills. Creativity? Emotional intelligence? The ability to negotiate with machines? These are the new currencies. And if your manager is mandating AI-generated summaries? Push back. The best way to retain agency is to outthink the tool—not just outwork it.
If you’re a policymaker: The U.S. and the EU are racing to regulate AI, but the current frameworks are reactive, not proactive. The real question isn’t *“How do we control AI?”* but *“How do we ensure AI serves democracy, not the other way around?”* Start with the OECD’s AI Principles, but demand enforcement. And for God’s sake, fund public education. The digital divide isn’t just about access; it’s about power.
If you’re just trying to keep up: The AI boom isn’t just about technology. It’s about culture. The people who thrive in this era won’t be the ones who embrace AI blindly—they’ll be the ones who question it. Ask yourself: Who benefits from this? Who gets left behind? And most importantly: What do I refuse to let go of?
The AI revolution isn’t coming. It’s here. The question isn’t whether we’re ready. It’s whether we’re willing.
So tell me: What’s the one thing you’re not letting an AI take from you?