OpenAI Chief Sam Altman Apologizes for Failing to Report Canada Mass Shooter’s ChatGPT Activity to Police
Published on Reflecto News | Technology | Artificial Intelligence & Public Safety
OpenAI CEO Sam Altman has issued a formal apology to the community of Tumbler Ridge, British Columbia, for failing to alert law enforcement about a banned ChatGPT account belonging to the perpetrator of a deadly mass shooting. In a letter addressed to the town last week, Altman expressed “deep sorrow” over the company’s inaction and acknowledged that OpenAI did not report the account to the Royal Canadian Mounted Police (RCMP) despite having flagged it for “furtherance of violent activities” eight months before the attack.
“I am deeply sorry that we did not alert law enforcement to the account that was banned in June. While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.” — Sam Altman, CEO of OpenAI

⚖️ The Tumbler Ridge Tragedy
On February 10, 2026, 18-year-old Jesse Van Rootselaar killed eight people—including family members at home and children and a teacher at a local secondary school—before dying from a self-inflicted gunshot wound. The attack, one of the deadliest in British Columbia’s history, also left at least 25 others injured.
In the aftermath, OpenAI confirmed that it had banned Van Rootselaar’s ChatGPT account the previous June. The company’s automated abuse detection system flagged the account for concerning scenarios involving gun violence, and about a dozen staff members reviewed the flagged content internally. After deliberating, however, OpenAI concluded that the activity did not meet the company’s threshold of a “credible, imminent plan for serious physical harm to others”—so the account was banned but never referred to the RCMP.
📝 The Apology and Public Response
Altman said he deliberately delayed the public apology to allow the community time to grieve, following conversations with British Columbia Premier David Eby and Tumbler Ridge Mayor Darryl Krakowka. In his April 23 letter, he offered condolences to the victims’ families and reaffirmed OpenAI’s commitment to working with governments to develop better safety protocols.
Premier Eby acknowledged the apology was “necessary and yet grossly insufficient” given the devastation to the families. Meanwhile, a civil lawsuit has been filed against OpenAI by the parents of a child who was severely injured in the shooting, alleging that the company “had specific knowledge of the shooter’s long-range planning” yet “took no steps to act upon this knowledge.”
🚨 Broader Legal and Regulatory Fallout
Florida Criminal Investigation
The Tumbler Ridge incident is not the only legal challenge OpenAI is confronting. Florida Attorney General James Uthmeier has launched a criminal investigation into the company over a separate deadly shooting at Florida State University, where a gunman used ChatGPT to seek detailed advice on weapons, ammunition, and tactics—including what time of day would be “appropriate for the shooting to interact with more people.”
“If it was a person on the other end of that screen, we would be charging them with murder.” — James Uthmeier, Florida Attorney General
OpenAI has denied responsibility, stating that the chatbot “provided factual responses” and “did not encourage or promote illegal or harmful activity.”
Policy and Regulatory Reform
In Canada, Federal AI Minister Evan Solomon met with Altman shortly after the tragedy. OpenAI has since agreed to several commitments, including:
- Direct reporting to the RCMP for serious flagged threats (bypassing the “credible/imminent” threshold)
- Retroactive review of previously flagged accounts
- Distress-redirect protocols to connect vulnerable users to local support services
- Collaboration with British Columbia on regulatory recommendations to Ottawa
🔍 A Gap in the Law
The Tumbler Ridge case has highlighted a significant gap in both U.S. and Canadian law: technology companies are currently under no legal obligation to report even credible, serious threats to law enforcement. Under the U.S. Stored Communications Act, providers may disclose user data voluntarily if they believe there is an “emergency involving danger of death or serious physical injury,” but nothing requires them to do so.
Legal experts have called for a new mandatory reporting framework modeled on the U.S. CyberTipline for child exploitation, which could require providers to submit flagged threats to an independent expert body for evaluation before referral to police. Such a system would relieve private companies of the burden of making life-and-death judgments while ensuring that credible threats do not fall through the cracks.
🔮 What Comes Next
Altman has promised that OpenAI will strengthen its safety measures, create a direct reporting channel to police, and continue working with governments to prevent future tragedies. For the families of Tumbler Ridge, however, these commitments come too late. As one parent stated, “The pain your community has endured is unimaginable. I cannot imagine anything worse in this world than losing a child.”