Trump and Jesus: Controversy Over AI Videos and Divine Imagery

When I first saw the headline about AI-generated videos depicting Jesus Christ attacking Donald Trump, my initial reaction wasn’t outrage—it was exhaustion. Not the kind that comes from scrolling too late, but the bone-deep weariness of watching sacred imagery get hijacked for clicks in an algorithmic frenzy. As someone who’s spent two decades chasing truth through war zones and election nights, I recognize a manufactured controversy when I see one. And yet, this isn’t just another viral stunt. It’s a symptom of something deeper: a cultural moment where faith, politics, and technology collide with zero guardrails, leaving the rest of us to clean up the mess.

The backlash was swift and predictable. Pete Hegseth, co-host of Fox & Friends, called the videos “disgusting and detached from reality”—a rare moment of agreement across the ideological aisle. But condemnation alone won’t stop the flood. These aren’t crude Photoshop jobs from a basement troll. They’re polished, hyper-realistic clips generated by sophisticated AI tools, spreading like wildfire across platforms that profit from outrage. What’s missing from the conversation isn’t just moral outrage—it’s context. Why now? Who benefits when the Prince of Peace is recast as a vengeful warrior? And what does it say about us that we’re more likely to share a blasphemous deepfake than question why it was made?

To understand this, we need to rewind—not to the Garden of Gethsemane, but to 2016. That’s when political campaigns first weaponized AI-generated imagery at scale, though few noticed. A study by the Brookings Institution found that during the last election cycle, over 30% of viral political memes contained some form of synthetic media, often distorting religious symbols to frame opponents as antichrist figures. The pattern is clear: when traditional debate fails, symbolism becomes warfare. And in a country where 65% of adults still identify as Christian—yet only 45% attend services regularly, per Pew Research—the sacred becomes an easy target for those who understand its emotional potency but not its meaning.

This isn’t the first time Christ’s image has been distorted for political ends. In the 19th century, pro-slavery theologians painted Jesus as a sanctioner of hierarchy. During the Cold War, both sides claimed divine endorsement for their ideologies. What’s different now is the speed and scale. A single prompt—“Jesus Christ, angry, throwing punches at Donald Trump, photorealistic”—can generate a video in seconds that might take a human hours to debunk. As Dr. Rumman Chowdhury, CEO of Humane Intelligence and former Twitter AI ethics lead, told me in a recent interview: “We’ve outsourced our moral imagination to algorithms that don’t understand context, only engagement. When you train a model on the darkest corners of the internet, don’t be surprised when it returns a crucifixion scene as a boxing match.”

The legal gray zone only makes this worse. While deepfakes depicting non-consensual pornography or election fraud are increasingly regulated, religious satire—no matter how offensive—often falls under protected speech. In 2023, the Supreme Court reaffirmed that even deeply offensive religious imagery enjoys First Amendment protection unless it incites imminent violence. But as legal scholar Danielle Citron notes, “The law wasn’t designed for a world where a teenager in Moldova can generate a blasphemous video that reaches millions before breakfast.” Her research at the University of Maryland shows that while 78% of Americans find AI-generated religious deepfakes unacceptable, fewer than 12% believe current laws adequately address them.

What’s rarely discussed is who profits from this chaos. Platforms don’t create these videos, but their algorithms amplify them because outrage drives watch time—a direct line to ad revenue. A 2024 NYU Stern study found that content blending religion and political violence gets 3.2x more shares than neutral political content, not because users seek it, but because the algorithm learns that shock = retention. We’ve built a digital Colosseum where the lions are algorithms, and the Christians—and everyone else—are just trying to make it through the arena without losing their faith.

So where do we go from here? Banning AI tools won’t work—the genie’s out of the bottle. But we can demand better. Platforms should implement friction prompts when users attempt to generate content combining sacred figures with violence, not as censorship, but as a moment’s pause: “Are you sure this contributes to meaningful dialogue?” Faith leaders, meanwhile, need to reclaim the narrative—not by doubling down on outrage, but by modeling the very compassion these videos pervert. Imagine if, instead of sharing the deepfake, churches flooded social media with videos of actual Christians feeding the hungry or visiting the imprisoned—acts that reflect the Christ they claim to follow.

The real danger isn’t that someone made a tasteless video. It’s that we’re starting to believe the distortion. When the line between satire and sincerity blurs, when we can’t tell if a video is mocking faith or expressing it, we lose more than trust in our screens—we lose trust in each other. And in a democracy, that’s the most dangerous deepfake of all.

What’s one small way you could push back against this kind of digital desecration today? Not by shouting louder, but by asking a better question: Who benefits when we stop seeing the sacred—and start seeing only the scandal?

Alexandra Hartman, Editor-in-Chief

Prize-winning journalist with over 20 years of international news experience. Alexandra leads the editorial team, ensuring every story meets the highest standards of accuracy and journalistic integrity.
