Canada’s artificial intelligence minister is raising serious concerns about the safety protocols of OpenAI and other AI platforms in the wake of the tragic shooting in Tumbler Ridge, British Columbia. The incident, which claimed the lives of eight people – including five children – has brought renewed focus to the potential risks associated with rapidly advancing AI technology and the responsibility of developers to prevent misuse.
Minister of Artificial Intelligence and Digital Innovation Evan Solomon expressed his “deep disturbance” regarding reports that concerning online activity linked to the suspect, Jesse Van Rootselaar, was flagged internally by OpenAI but not immediately reported to law enforcement. This delay has sparked a national conversation about the need for more robust safety measures and clearer escalation procedures within AI companies. The incident underscores the complex challenge of balancing innovation with public safety in the age of artificial intelligence.
The RCMP has identified Van Rootselaar as the person responsible for the February 10th shooting at Tumbler Ridge Secondary School; she took her own life after the attack. The story of her use of ChatGPT first surfaced in the Wall Street Journal, prompting Solomon to contact OpenAI and other AI companies to discuss their policies.
Solomon stated the federal government is actively reviewing “a suite of measures” designed to protect Canadians, with a particular focus on children. “All options are on the table to ensure that public safety and the protection of our children are the cornerstone of any technology built into these systems from the outset,” he said. This commitment signals a potential shift towards greater regulation and oversight of the AI industry in Canada.
The province of British Columbia also revealed that OpenAI did not proactively inform government officials about potential evidence related to the shooting, despite a meeting with a provincial representative on February 11th – a meeting scheduled weeks prior to the incident and focused on OpenAI’s potential expansion into Canada. The following day, OpenAI requested contact information for the RCMP, which was subsequently provided through the province’s director of policing and law-enforcement services.
OpenAI confirmed to CBC News that the account associated with Van Rootselaar was banned after automated tools and human review identified “misuses of our models in furtherance of violent activities.” However, the company said the account’s activity in June 2025 did not meet its threshold for immediate reporting to law enforcement. OpenAI proactively reached out to the RCMP with information about Van Rootselaar’s use of ChatGPT after the shooting occurred.
Tumbler Ridge Secondary School is pictured the day after a school shooting in Tumbler Ridge, British Columbia, on Wednesday, Feb. 11, 2026. (Ben Nelms/CBC)
Premier David Eby described the reports as “profoundly disturbing” and confirmed that police are pursuing preservation orders for any potential evidence held by digital service companies, including social media platforms and AI providers. This legal action aims to secure crucial information that could shed light on the events leading up to the tragedy.
The Tumbler Ridge shooting is considered one of the worst mass shootings in Canadian history, prompting an outpouring of grief and support across the country and internationally. The incident has reignited the debate surrounding gun control, mental health support and the role of technology in preventing future tragedies.
The focus now shifts to how AI companies can improve their safety protocols and ensure a more rapid response to potential threats. The Canadian government’s review of existing measures and consideration of new regulations will be critical in shaping the future of AI safety and accountability. The incident also raises broader questions about the ethical responsibilities of developers and the need for greater transparency in the development and deployment of AI technologies.
What comes next will likely involve increased scrutiny of AI safety protocols, potential legislative changes, and a continued dialogue between government, industry, and the public. The goal is to create a framework that fosters innovation while prioritizing the safety and well-being of all Canadians.