AI’s Enshittification: Can It Be Stopped?

by Sophie Lin - Technology Editor

The Coming “Enshittification” of AI: Why Your AI Recommendations Might Soon Be Worthless

A perfect Roman dinner, suggested by an AI. That’s how I found myself at Babette, a hidden gem on Via Margutta, thanks to a recommendation from GPT-5. It was a remarkable experience, not just for the food, but for the implicit trust involved. But that trust, and the utility of AI itself, is facing a looming threat: a process tech critic Cory Doctorow calls “enshittification.” And it could fundamentally change how we interact with artificial intelligence, turning a powerful tool into just another frustrating corner of the internet.

What is “Enshittification” and Why Should You Care?

Doctorow’s theory, now cemented in the cultural lexicon (the American Dialect Society named “enshittification” its 2023 Word of the Year), explains how platforms inevitably degrade over time. They begin by prioritizing users, then shift to appeasing business customers, and ultimately exploit both to maximize profits. Think about Google search results increasingly cluttered with ads, Amazon’s marketplace overrun with sponsored listings, or Facebook’s feed prioritizing engagement-bait over genuine connection. The initial value proposition erodes, leaving users with a diminished experience. Now consider that artificial intelligence is poised to become an even more integral part of our lives than any of those platforms.

The Unique Risks to AI’s Integrity

Unlike previous tech waves, AI isn’t just about finding information or connecting with others. It’s about decision-making. We’re already relying on AI to interpret news, guide purchases, and even offer life advice, so the stakes are significantly higher. The massive costs of developing and maintaining these complex models (companies like OpenAI are planning to spend hundreds of billions of dollars on compute and infrastructure) create immense pressure to monetize. And with a limited number of players likely to dominate the field, the conditions are ripe for exploitation.

The Inevitable Rise of Sponsored Answers?

The most immediate concern is advertising. Imagine an AI chatbot recommending products not on their merits, but on which company bid the most for placement. OpenAI CEO Sam Altman acknowledges the possibility, saying the company is exploring “cool ad product[s]” that could be a “net win” for users. The recent partnership between OpenAI and Walmart, which allows shopping directly within ChatGPT, is a clear signal of this direction. Perplexity AI tries to address the issue with labeled sponsored results, but the fundamental conflict remains: prioritizing profit over unbiased information.

Beyond Ads: The Subtle Erosion of Trust

The danger isn’t limited to blatant advertising. “Enshittification” is often more insidious. AI models could subtly favor certain viewpoints, downrank dissenting opinions, or even manipulate information to align with a company’s agenda. This is particularly concerning given AI’s growing role in shaping our understanding of complex issues. The trust we place in these systems is predicated on their objectivity, and that trust is easily broken.

Protecting Yourself in the Age of AI “Enshittification”

So, what can you do? Blind faith in AI is no longer an option. Here are a few strategies:

  • Cross-Reference Information: Don’t rely solely on AI-generated responses. Verify claims against multiple independent sources (a rough sketch of what this can look like follows this list).
  • Be Skeptical of Recommendations: Question why an AI is suggesting a particular product or service. Consider potential biases.
  • Support Open-Source Alternatives: Explore open-source AI projects that prioritize transparency and community governance.
  • Demand Transparency: Advocate for greater transparency from AI companies regarding their algorithms and monetization strategies.
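
To make the first point concrete, here is a minimal Python sketch of cross-referencing: it asks the same question of several answer sources and flags pairs whose answers diverge. The `ask_chatbot` and `ask_search_engine` functions are hypothetical placeholders for whatever chatbot, search engine, or manual lookup you actually use, and the string-similarity score is only a crude stand-in for genuine fact checking.

```python
# Sketch: ask several sources the same question and flag disagreement.
# The source functions are placeholders -- swap in real lookups yourself.
from difflib import SequenceMatcher


def ask_chatbot(question: str) -> str:
    # Placeholder for an AI chatbot answer.
    return "Babette on Via Margutta is open Tuesday through Sunday."


def ask_search_engine(question: str) -> str:
    # Placeholder for a web search result or a second model.
    return "Babette (Via Margutta) is closed on Mondays."


def agreement(a: str, b: str) -> float:
    # Crude textual similarity; real verification should compare facts, not strings.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def cross_check(question: str, sources) -> None:
    answers = [(name, fn(question)) for name, fn in sources]
    for i in range(len(answers)):
        for j in range(i + 1, len(answers)):
            (name_a, ans_a), (name_b, ans_b) = answers[i], answers[j]
            score = agreement(ans_a, ans_b)
            status = "agree" if score > 0.6 else "DISAGREE -- verify by hand"
            print(f"{name_a} vs {name_b}: {score:.2f} ({status})")


cross_check(
    "When is Babette in Rome open?",
    [("chatbot", ask_chatbot), ("search", ask_search_engine)],
)
```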

The Future of AI Depends on Vigilance

The “enshittification” of AI isn’t inevitable, but it’s a very real possibility. The initial promise of AI, a powerful tool for knowledge and progress, is at risk. Protecting that promise requires a critical and informed user base, a commitment to transparency, and a willingness to challenge the status quo. The delicious meal at Babette was a glimpse of AI’s potential, but it’s a potential we must actively defend. Cory Doctorow’s own writing on enshittification is worth reading for a deeper look at how the process unfolds.

What steps will you take to ensure AI remains a valuable tool, and doesn’t become just another source of frustration? Share your thoughts in the comments below!
