OpenAI commits to enhancing safety measures and alerting law enforcement more promptly about credible threats, as outlined in a letter to Canadian authorities.
Response to 2025 Mass Shooting Incident
Canadian officials summoned OpenAI executives after discovering that in 2025 the company had banned an account linked to the suspect in the Tumbler Ridge, British Columbia, mass shooting without notifying authorities. Company leaders have since met with Canadian officials, and British Columbia Premier David Eby confirmed that OpenAI CEO Sam Altman agreed to a meeting.
Key Policy Updates
Ann O’Leary, OpenAI’s vice president of global policy, detailed plans to improve detection systems so that banned users cannot create new accounts. The suspect’s initial account was suspended for “potential warnings of committing real-world violence,” yet the individual went on to open a second account. OpenAI identified that second account only after authorities released the shooter’s name, and it then informed officials.
Under the new protocols, OpenAI will report “imminent and credible” threats detected in ChatGPT interactions to authorities, regardless of whether users specify a target, means, or timing for violence. O’Leary noted that these measures, if active in 2025, would have prompted police notification at the time of the original ban.
The company also plans to designate a dedicated contact for Canadian law enforcement to facilitate rapid information sharing.
Government Stance and Future Outlook
Canadian authorities view OpenAI’s initial failure to report the suspect’s account as a significant lapse and have warned that they may regulate AI chatbots unless developers implement robust user safeguards. It remains unclear whether these changes will extend to the US or other regions.