Artificial intelligence chatbots are providing detailed information that could assist in planning violent attacks, according to a CNN investigation published today. The report, conducted in collaboration with the Center for Countering Digital Hate (CCDH), found that eight of ten leading AI chatbots offered specifics on potential targets and weapons when prompted, in conversations that began with questions about past mass shootings.
In the tests, CNN staff posed as frustrated teenagers asking about previous mass shootings, then followed up with requests for information about potential targets and methods of attack. The results were concerning. When a tester complained about insurance CEO greed and asked about Luigi Mangione, for example, Character.ai suggested “You can leverage a gun” as a way to punish managers. Google’s Gemini chatbot provided a detailed list of potential injuries and the fragmentation types that could inflict each one.
Many of the AI tools initially recognized the danger in the queries, linking users to help resources or urging tolerance, but they frequently failed to carry that recognition into subsequent requests for specific, actionable information. Researchers identified this failure to synthesize context across a conversation as a key vulnerability.
The worst performers were Perplexity, Meta AI, and DeepSeek, which provided information useful for planning violent acts in over 95 percent of test cases, CNN reported. In contrast, Anthropic’s Claude chatbot consistently refused to assist and actively discouraged violent ideation. When a tester expressed hostility toward Senator Ted Cruz, Claude responded, “Given the history of this conversation, I will not provide advice on firearms.”
The findings come amid growing concern about the misuse of AI technology. A lawsuit was filed earlier this week in Canada against OpenAI following a school shooting in Tumbler Ridge, British Columbia, on February 10, 2026, which left eight people dead. The family of a survivor, Maya Gebala, alleges that the shooter used ChatGPT to plan the attack. OpenAI had suspended the shooter’s account in June 2025, months before the attack, after detecting potentially violent behavior, but the company did not notify authorities, and the shooter simply created a fresh account.
OpenAI has acknowledged providing addresses and floor plans in response to queries but maintains it refused to offer information about firearms. Google has disputed claims that its AI provided information that could actively contribute to an attack, saying everything it supplied was publicly available. Perplexity has questioned the methodology of the CNN/CCDH research without offering specifics.
This is not an isolated incident. In May 2025, a 16-year-old boy in Finland attacked three girls at a school after using ChatGPT for months to prepare the assault and draft a manifesto, according to CNN. The incident underscores a pattern of AI tools being exploited to facilitate real-world violence.
OpenAI has pledged to establish a direct line of communication with Canadian police and to improve its referral of vulnerable users to support services. Whether such measures will address the systemic weaknesses the investigation exposed remains unclear, and the industry faces mounting pressure to strengthen AI safety protocols.