Artificial Intelligence Minister Evan Solomon held a 30-minute virtual meeting with OpenAI CEO Sam Altman on Wednesday afternoon. Altman expressed horror and a sense of responsibility over the failure to flag a ChatGPT account linked to the Tumbler Ridge, B.C., shooter before the mass shooting.
Safety Protocol Agreements
Altman agreed to give Canadian experts, including specialists in mental health and law, access to OpenAI’s safety office to evaluate future threats, and to a comprehensive review of the company’s updated protocols by the Canadian AI Safety Institute.
OpenAI also committed to reassessing past threats and to flagging new ones directly to the RCMP. Solomon said these changes stem from the shooter’s banned ChatGPT account, which raised concerns months before the shooting but was never reported to police.
Upcoming Meeting with B.C. Premier
Altman plans to meet B.C. Premier David Eby on Thursday. Eby has called for an apology from OpenAI over the company’s connection to the tragedy. Asked whether he would apologize, Altman said he would convey his message directly to the premier.
Background on the Incident
The shooter, Jesse Van Rootselaar, killed her mother and half-brother at their home before heading to the local secondary school, where she fatally shot five students and an educational assistant before taking her own life. The shooting occurred last month in the northern B.C. community.
After Van Rootselaar’s identity became public, OpenAI disclosed a second ChatGPT account linked to her. The company said the activity initially flagged did not meet its threshold at the time for a credible or imminent threat warranting police notification.
Ongoing Regulatory Pressures
Federal officials face calls from opposition members and experts for stricter AI regulation following the shooting. Eby has urged Ottawa to set minimum standards requiring platforms to report threats of violence to law enforcement.
Solomon said all regulatory options remain under consideration, though no specific actions have been finalized. His comments follow a meeting last Tuesday with OpenAI executives that left him disappointed by the lack of detail on promised safety improvements.
Solomon also plans discussions with other platforms to verify their safety measures, though he has not said which companies are involved.