OpenAI CEO apologizes to Tumbler Ridge for not alerting police
OpenAI CEO Sam Altman apologized after the company banned a ChatGPT account tied to a February mass shooting but did not notify police.
Sam Altman wrote in a public letter that OpenAI should have notified law enforcement after it banned a ChatGPT account in June 2025 for activity that furthered violence. The company’s abuse-detection systems had flagged the account months before the attack, but OpenAI determined the behavior did not meet its threshold for a credible or imminent threat and did not contact the Royal Canadian Mounted Police.
The February shooting in Tumbler Ridge, British Columbia, began when an 18-year-old identified as Jesse Van Rootselaar allegedly killed her 39-year-old mother, Jennifer Jacobs, and her 11-year-old stepbrother, Emmett Jacobs, at their home. The suspect then went to Tumbler Ridge Secondary School, where five students and one teacher were killed. Van Rootselaar died by suicide at the scene. Local officials reported 25 people were injured.
In the letter released Friday, Altman expressed condolences and acknowledged the company’s failure to alert authorities. He wrote, “I am deeply sorry that we did not alert law enforcement to the account that was banned in June.” He added, “While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.” Altman said he had spoken with Mayor Darryl Krakowka and British Columbia Premier David Eby, who requested a public apology.
OpenAI confirmed the account was banned for violating its usage policies. The company said it considered notifying the Royal Canadian Mounted Police at the time but judged that the flagged behavior did not rise to the level of a credible, imminent threat of serious physical harm. After the shooting, OpenAI acknowledged that decision and faced criticism for it.
Premier David Eby responded on social media, calling the apology necessary but insufficient and pledging support for the town and its mayor.
The case has prompted inquiries into when and how AI companies should report user behavior that might signal real-world violence. Florida’s attorney general opened an investigation into OpenAI to examine whether the company’s systems pose risks related to criminal misuse, national security and child safety, and indicated subpoenas will be issued. State and federal authorities are reviewing how generative AI platforms handle warnings about violent intent and mental-health crises.
Altman wrote that OpenAI will work with all levels of government to help prevent similar incidents and reiterated a commitment to improving safety processes. He did not provide specific details or a timeline for changes. The company did not offer additional comment beyond the letter.