On a rain-slicked morning in Washington, D.C., Congressman Ro Khanna stood before a packed room at the National Press Club, his voice cutting through the haze of political fatigue with a question that has begun to haunt Capitol Hill corridors: Is artificial intelligence birthing a new aristocracy untethered from accountability? The query landed not as abstract speculation but as a direct extension of a decades-old wound—the Jeffrey Epstein scandal—where wealth and influence erected a firewall so thick that even federal investigators stumbled at its gates. Today, as AI systems concentrate unprecedented power in the hands of a handful of tech titans, Khanna’s warning echoes with urgent clarity: without intervention, we are not merely witnessing economic disruption but the engineering of a legal and moral void where the powerful operate beyond the reach of law.
The concern is not hypothetical. In March 2026, the Department of Justice quietly closed its review of Epstein-associated financial networks, citing “insufficient evidence” despite possessing over 3 million pages of documents—many heavily redacted—linking financiers, tech executives, and political figures to the trafficking ring. Khanna, who has pressed for transparency since 2023, denounced the move as a continuation of a pattern where the affluent evade scrutiny. “We cannot accept a justice system that treats the names of the accused as more worthy of protection than the trauma of the survivors,” he stated during the Press Club forum, his frustration palpable. “Redacting names while leaving victims exposed isn’t protection—it’s complicity.”
This dynamic, Khanna argued, is being replicated in real time within the AI sector. Companies like NVIDIA, Microsoft, and emerging players in generative AI are amassing valuations that dwarf traditional industries, with NVIDIA alone crossing a $3 trillion market cap in early 2026. Yet, as their influence grows, so does their insulation from regulatory oversight. The Federal Trade Commission, under Chair Lina Khan, has initiated antitrust probes into AI partnerships—such as Microsoft’s $13 billion investment in OpenAI—but progress remains glacial, hampered by jurisdictional ambiguities and lobbying expenditures that exceeded $120 million across the tech sector in 2025, according to OpenSecrets data.
The parallels to the Epstein era are stark. Just as Epstein’s network exploited legal gray zones—using offshore trusts, private islands, and layered corporate structures to obscure accountability—today’s AI leaders navigate a similarly murky terrain. Training data harvested without consent, algorithmic bias embedded in hiring and lending tools, and the deployment of autonomous systems in warfare all occur under a patchwork of state-level guidelines and voluntary ethics frameworks. No federal AI regulatory agency exists, despite repeated calls from lawmakers like Senator Elizabeth Warren and Representative Ted Lieu for a dedicated bureau modeled after the FDA or FAA.
As Khanna emphasized during his exchange with Taya Graham of The Real News Network, the solution requires more than piecemeal fixes. “We need a federal AI regulatory agency with real enforcement power,” he said, drawing a parallel to aviation safety. “Would you board a plane if the FAA didn’t exist? Would you trust nuclear power without the NRC? AI, which its own creators admit could reshape civilization, deserves no less.” His proposal includes mandatory impact assessments for high-risk AI systems, prohibitions on emotion recognition in workplaces and education, and a requirement that foundational models undergo third-party audits before deployment—measures already under consideration in the European Union’s AI Act, which took full effect in August 2025.
But regulation alone may not suffice. Historians and economists warn that without structural interventions, the wealth generated by AI could entrench a new gilded caste. A 2025 study by the Roosevelt Institute found that generative AI could automate up to 30% of white-collar tasks by 2030, disproportionately affecting mid-level professionals in finance, law, and consulting—sectors where Epstein’s network once found fertile ground for recruitment and influence. Simultaneously, the benefits are flowing upward: the top 1% of AI-related patent holders captured 82% of licensing revenue in 2024, per Brookings Institution analysis, while worker displacement programs remain virtually nonexistent in major tech firms.
This imbalance, Khanna insists, demands a progressive tax on extreme wealth—specifically, a 2% annual levy on net assets exceeding $50 million—to fund retraining, universal healthcare, and democratic resilience initiatives. “We taxed railroads, we taxed oil, we taxed telecommunications when they became infrastructure,” he noted. “AI is now essential infrastructure. It should contribute to the public good it disrupts.”
The stakes extend beyond economics. In a candid moment during the Press Club Q&A, Khanna connected the dots between unchecked technological power and democratic erosion. “When a handful of firms control the models that shape hiring, credit, news, and even judicial risk assessments, they don’t just influence markets—they shape sovereignty,” he said. “Citizens United already lets money speak louder than votes. Now imagine algorithms that can micro-target voters with synthetic media, optimize suppression tactics, or predict dissent before it surfaces. That’s not innovation—it’s oligarchy by code.”
His warning finds resonance in recent events. In January 2026, a whistleblower from a major AI lab revealed internal communications showing executives discussing how to delay watermarking protocols for deepfake videos until after the midterm elections, citing “market adoption risks.” The disclosure, reported by The Washington Post, triggered a Senate Judiciary Committee hearing where Senator Dick Durbin warned: “We are watching the slow-motion privatization of truth.”
Yet, amid the alarm, there are signs of resistance. Grassroots coalitions like the Algorithmic Justice League and the AI Now Institute have documented harms ranging from facial recognition failures in Black communities to wage theft via algorithmic management in warehouses. Their advocacy helped pass New York City’s AI hiring bias law in 2024, which requires annual audits of automated employment tools—a model now being replicated in Illinois and Colorado.
Internationally, the contrast is instructive. While the U.S. lags, the EU’s AI Act has begun enforcement, issuing its first fines in February 2026 to a social media platform for deploying emotion-scanning software in workplaces without risk assessment. Canada’s Artificial Intelligence and Data Act, effective since late 2025, mandates transparency for high-impact systems and includes whistleblower protections. These frameworks, though imperfect, offer a counterweight to the laissez-faire approach dominating American policy.
Back in Washington, the debate is no longer theoretical. As AI systems grow more autonomous—making loan denials, parole recommendations, and even medical triage calls—the question of who governs the governors becomes inseparable from the broader struggle for equity. Khanna’s framing of an “Epstein class” is not merely metaphorical; it is a diagnostic lens. It reveals how impunity, once normalized in one domain, metastasizes into others, teaching the powerful that consequences are optional.
The path forward, he insists, requires courage—not just from legislators, but from an informed public willing to demand accountability. “We don’t need to reinvent the wheel,” he said, closing his Press Club remarks. “We need to apply the lessons we’ve already learned: that power without oversight corrupts, that secrecy shields abuse, and that democracy survives only when we refuse to look away.”
As the nation grapples with the promise and peril of artificial intelligence, the real test may not be technological—but moral. Will we allow a new elite to rise, unchallenged and unseen, shaping lives from behind a curtain of algorithms? Or will we finally build the safeguards that ensure progress serves the many, not just the few who code the future?
What safeguards do you believe are essential to prevent AI from consolidating unaccountable power? Share your thoughts below—we’re listening.