Lawsuits and safety concerns
Character.AI was founded in 2021 by Noam Shazeer and Daniel De Freitas, two former Google engineers, and raised nearly $200 million from investors. Last year, Google agreed to pay about $3 billion to license Character.AI’s technology, and Shazeer and De Freitas returned to Google.
But the company now faces several lawsuits alleging that its technology contributed to teen deaths. Last year, the family of 14-year-old Sewell Setzer III sued Character.AI, accusing the company of being responsible for his death. Setzer died by suicide after frequently texting and conversing with one of the platform’s chatbots. The company faces additional lawsuits, including one from a Colorado family whose 13-year-old daughter, Juliana Peralta, died by suicide in 2023 after using the platform.
In December, Character.AI announced changes, including improved detection of violating content and revised terms of service, but those measures did not restrict underage users from accessing the platform. Other AI chatbot services, such as OpenAI’s ChatGPT, have also come under scrutiny for their chatbots’ effects on young users. In September, OpenAI introduced parental control features intended to give parents more visibility into how their children use the service.
The cases have drawn attention from government officials, which likely pushed Character.AI to announce the changes to under-18 chat access. Steve Padilla, a Democrat in California’s State Senate who introduced the safety bill, told The New York Times that “the stories are mounting of what can go wrong. It’s important to put reasonable guardrails in place so that we protect people who are most vulnerable.”
On Tuesday, Senators Josh Hawley and Richard Blumenthal introduced a bill to bar AI companions from use by minors. In addition, California Governor Gavin Newsom this month signed a law, taking effect on January 1, that requires AI companies to have safety guardrails on chatbots.