Families sue OpenAI, alleging chatbot aided in Canadian school shooting

Plaintiffs accuse OpenAI of not alerting authorities to threat signs, leading to a school shooting in February.


OpenAI CEO Sam Altman apologised to the victims last week for not flagging the account to the police [File: Jose Luis Magana/AP Photo]

The families of victims of a school shooting in a remote Canadian Rockies town are suing artificial intelligence company OpenAI in a United States federal court, alleging that the ChatGPT maker failed to alert police to the shooter’s alarming interactions with the chatbot.

A lawsuit filed on Wednesday on behalf of 12-year-old Maya Gebala, who was critically injured in the February shooting, is among the first of more than two dozen cases from families in Tumbler Ridge, British Columbia, in what their lawyers say represents “an entire community stepping forward to hold OpenAI accountable”.


Six other lawsuits filed in a San Francisco federal court allege wrongful death claims on behalf of five children and an educator killed in Canada’s deadliest mass shooting in years.

The cases were brought by the families of the five children slain in the school shooting: Zoey Benoit, Abel Mwansa Jr, Ticaria “Tiki” Lampert and Kylie Smith, all 12, and Ezekiel Schofield, 13, as well as by the family of education assistant Shannda Aviugana-Durand.

Jesse Van Rootselaar, whose interactions with ChatGPT are at the centre of the lawsuits, shot her mother and stepbrother at home before killing an education assistant and five students aged 12 to 13 at her former school on February 10, according to police. Van Rootselaar, who was 18, then died by suicide. Twenty-five people were also injured in the attack.

An OpenAI spokesperson called the shooting “a tragedy” and said the company has a zero-tolerance policy for using its tools to assist in committing violence.


“As we shared with Canadian officials, we have already strengthened our safeguards, including improving how ChatGPT responds to signs of distress, connecting people with local support and mental health resources, strengthening how we assess and escalate potential threats of violence, and improving detection of repeat policy violators,” the spokesperson said in a statement.


CEO Sam Altman sent a letter last week formally apologising to the community for his company’s failure to notify law enforcement about the shooter’s online behaviour.

The cases are part of a growing wave of lawsuits accusing artificial intelligence companies of failing to prevent chatbot interactions that plaintiffs say contribute to self-harm, mental illness and violence. They appear to be the first in the US to allege that ChatGPT played a role in facilitating a mass shooting.

Jay Edelson, who is representing the plaintiffs, said he plans to file another two dozen lawsuits in the coming weeks against the company on behalf of others affected by the shooting.

According to one complaint, OpenAI’s automated systems in June 2025 flagged ChatGPT conversations in which the attacker described gun violence scenarios.

Safety sidelined

Safety team members recommended contacting police after concluding she posed a credible and imminent threat of harm, the complaint said, citing a Wall Street Journal article from February about the company’s internal discussions.

But Altman and other OpenAI leadership overruled the safety team, and police were never called, the lawsuit alleges. The shooter’s account was deactivated, but she was able to create a new account and continue using the platform to plan her attack, the lawsuit says.


Following the Wall Street Journal report, the company said the account was flagged by systems that identify “misuses of our models in furtherance of violent activities” but did not meet its internal criteria for reporting to law enforcement.

The lawsuits allege “the victims didn’t learn this because OpenAI was forthcoming, but because its own employees leaked it to The Wall Street Journal after they could no longer stomach the company’s silence.”

In a blog published on Tuesday, OpenAI said it trains its models to refuse requests that could “meaningfully enable violence” and notifies law enforcement when conversations suggest “an imminent and credible risk of harm to others”, with mental health experts helping assess borderline cases. The company said it continually refines its models and detection methods based on usage and expert input.

The lawsuits seek an unspecified amount of damages and a court order requiring OpenAI to overhaul its safety practices, including mandatory law enforcement referral protocols. One of the victims originally filed her lawsuit in a Canadian court but dismissed it to pursue claims in California, Edelson said.

The lawsuits follow similar cases filed in US state and federal courts in recent months that alleged ChatGPT facilitated harmful behaviour, suicide, and, in at least one case, a murder-suicide.


The cases, still in early stages, are expected to test what role an AI platform can play in promoting violence and whether companies can be held liable for user actions. OpenAI has denied the claims, arguing in the murder-suicide case that the perpetrator had a long history of mental illness.

