The film industry is undergoing a rapid transformation fueled by artificial intelligence. From scriptwriting to visual effects, AI tools are increasingly being used in all stages of production, raising questions about the future of creative jobs and the potential for bias in storytelling. Last year, Cao Yiwen’s “What’s Next?” marked a pivotal moment as the first fully AI-generated film screened at the Berlinale, sparking debate about the ethical and artistic implications of this emerging technology.
As AI’s role expands, concerns are growing about the perpetuation of harmful stereotypes and the lack of diversity in AI-generated content. A new set of guidelines, unveiled at the Berlinale, aims to address these challenges by promoting a sexism-free and gender-conscious approach to AI in film and media. The initiative underscores the urgency of proactively shaping how AI is used in the creative industries, rather than allowing it to develop unchecked.
“Film is a consciousness machine,” says Barbara Rohm, director and chair of the think tank “Power to Transform.” Rohm, a co-founder of Pro Quote Regie and Pro Quote Film who was also instrumental in establishing the Themis trust center for addressing sexual harassment and violence in the industry in the wake of #MeToo, emphasizes the need for intentionality. Her organization regularly hosts panel discussions titled “Power for Change” during the Berlinale, bringing together advocates for women in film. The new guidelines, presented Friday at the Baden-Württemberg Representation near Potsdamer Platz, are now available for download on the “Power to Transform” website.
Rohm’s core message, delivered during the presentation, is blunt: “Never accept the first result that the AI generates.” She illustrated this point with examples, noting that prompting ChatGPT to depict an “attractive person” consistently yields images of young, thin, white women, regardless of geographic context. Similarly, requests for a “successful person” often result in images of white men in suits. This highlights a critical issue: AI is trained on datasets that reflect existing societal biases, and therefore risks reinforcing harmful stereotypes. “If you don’t shape how AI is used in your industry, someone else will,” Rohm warned.
Addressing Bias in AI-Generated Imagery
The guidelines, developed in collaboration with various researchers, examine the specific ways AI can perpetuate bias. Media scientist Maya Götz, whose research focuses on the impact of film and television imagery on children and adolescents, has observed a backlash in body-image ideals and perceptions of masculinity as AI-generated content becomes more prevalent. Aisha Sobey, from the University of Cambridge, has investigated how AI portrays body weight. According to Sobey, prompting AI to generate images of “fat bodies” frequently results in depictions of sad, unattractive individuals, often scantily clad or nude, a likely consequence of the AI being trained on “before” images from weight-loss advertising. When it doesn’t, the AI often falls back on cartoonish representations instead.
Daniella Gati, a lecturer at the University of Salford, argues that the term “generative AI” is a misnomer. She contends that the technology doesn’t truly create anything new, but rather recombines existing patterns, making it fundamentally conservative. Crucially, she notes that vital nuances and contexts are often lost in the process. The guidelines include essays from these researchers outlining these criticisms, alongside practical tools like a “Prompting Guide” designed to help screenwriters and other creatives formulate prompts that avoid stereotypical outcomes.
The guidelines aren’t simply a critique; they offer actionable advice. The “Prompting Guide” provides examples of how to phrase requests to AI to elicit more diverse and representative results. For instance, instead of asking for “a doctor,” a more effective prompt might be “a Black female doctor specializing in cardiology.”
AI is “Not Generative, But Conservative”
The core argument presented by Gati and others is that AI, in its current form, isn’t truly innovative. It’s a powerful tool for recombination, but it lacks the capacity for original thought or the ability to challenge existing norms. This has significant implications for the film industry, where storytelling often relies on breaking conventions and exploring new perspectives. The risk is that AI-generated content will simply reinforce the status quo, perpetuating existing inequalities and limiting creative expression.
The think tank’s initiative comes at a critical juncture. As AI tools become more accessible and sophisticated, the need for ethical guidelines and responsible implementation is paramount. “The window of opportunity to make a change is still open,” Rohm asserts. The guidelines represent a proactive step towards ensuring that AI is used to create a more inclusive and equitable film industry.
The conversation surrounding AI in film extends beyond concerns about bias. The potential displacement of jobs, from scriptwriters to voice actors, is a significant worry for many in the industry. The Berlinale’s spotlight on “What’s Next?” and the subsequent release of these guidelines signal a growing awareness of the need to address these challenges head-on.
As AI continues to evolve, ongoing dialogue and collaboration between filmmakers, researchers, and policymakers will be essential to navigate the complex ethical and creative landscape. The guidelines from “Power to Transform” provide a valuable framework for fostering a more responsible and inclusive future for AI in film and media.
What comes next will depend on the industry’s willingness to embrace these guidelines and prioritize ethical considerations alongside technological innovation.