AI Creates Outdated Images of Neanderthals Despite Expert Prompts

by Sophie Lin - Technology Editor

Artificial intelligence, despite its rapid advancements, is still grappling with accurately visualizing the past. A new study reveals that popular AI programs like ChatGPT and DALL-E 3 consistently generate images and narratives of Neanderthals – our extinct human relatives – that are rooted in outdated scientific understandings. This raises concerns about how these tools are shaping public perception of early humans, particularly as they become increasingly relied upon for quick information.

Researchers found that even when explicitly prompted to prioritize scientific accuracy, the AI systems defaulted to older, often inaccurate, depictions. The study, published in Advances in Archaeological Practice, highlights a significant gap between current archaeological knowledge and the representations produced by these widely used AI tools. This isn’t simply a matter of aesthetic inaccuracy; it’s a potential distortion of our understanding of human history.

The research team, led by Dr. Matthew Magnani, an associate professor of anthropology at the University of Maine (UMaine), generated hundreds of Neanderthal scenes using both text-based (ChatGPT) and image-based (DALL-E 3) AI. Each prompt was run 100 times, with variations requesting both casual and expert responses. The results consistently showed a tendency to revert to older scholarship, even when instructed to be scientifically accurate. “Our study provides a template for other researchers to explore the gap between scientific research and AI-generated content,” Magnani said.

One key finding was the persistent portrayal of Neanderthals as heavily muscled males, often depicted as solitary hunters. This imagery echoes earlier, now largely debunked, ideas about prehistory that minimized the role of women and children and emphasized a simplistic view of Neanderthal life. Modern archaeology has increasingly focused on reconstructing Neanderthal family life, caregiving, and the complexities of their social structures, but the AI consistently defaulted to these outdated tropes.

The inaccuracies weren’t limited to depictions of Neanderthal appearance and behavior. The AI also frequently introduced anachronisms – objects and technologies that didn’t exist during the Neanderthal period. Generated scenes included ladders, thatched roofs, woven baskets, glass vessels, and even metal tools, despite the absence of evidence for such manufacturing capabilities at Neanderthal sites. These additions, while seemingly minor, create a distorted timeline that can be misleading, particularly for readers unfamiliar with archaeological research.

AI’s Historical Blind Spots: A Matter of Data

The study suggests that the AI’s reliance on outdated information stems from the data it was trained on. Copyright restrictions and paywalls have historically limited access to twentieth-century archaeological research, meaning older, and often superseded, scholarship is more readily available for AI to “learn” from. As open-access publishing expands, the hope is that AI models will be able to draw on more current and accurate information. But as long as research access remains uneven, AI depictions of the past will likely continue to favor the most readily available sources rather than the most accurate ones.

The language used by the AI also revealed a temporal disconnect. Text generated by ChatGPT sounded closest to archaeological writing from the early 1960s, while the images produced by DALL-E 3 aligned more closely with the late 1980s and early 1990s. This discrepancy suggests that AI can create visually updated representations while still relying on outdated narratives.

Implications for Education and Public Understanding

The findings have significant implications for education and public understanding of prehistory. Teachers are already reporting instances of students incorporating AI-generated answers into their work, often without verifying the information. The speed and convenience of AI tools can discourage critical thinking and source checking, potentially perpetuating misconceptions about Neanderthals and other early humans.

Magnani emphasizes the importance of examining the biases embedded in everyday AI use. “Without that habit, a slick-looking Neanderthal scene can teach the wrong lesson faster than a careful lecture can,” he said. The study highlights the need for stronger links between AI systems and searchable databases of scientific research, allowing them to access and utilize current scholarship directly.

Researchers plan to repeat the study as AI models are updated to track whether the gap between scientific research and AI-generated content narrows, or if new biases emerge. The case of the Neanderthals serves as a stark reminder that AI, while powerful, is not a neutral source of information and requires careful scrutiny.

As AI continues to reshape how we access and understand information, a healthy dose of skepticism and a commitment to verifying sources will be crucial for ensuring that our understanding of the past remains grounded in scientific accuracy.

What are your thoughts on the role of AI in historical representation? Share your comments below.
