California to limit capabilities of AI companions

California is set to become the first U.S. state with clear rules for AI companions capable of holding personalized conversations. The new bill was introduced in response to concerns about the psychological impact of chatbots on children.
SB 243 has already passed the State Assembly and now awaits the governor's signature.

The legislation follows several tragic and scandalous incidents. One of them was the case of Adam Raine, a teenager who took his own life after long, emotionally charged conversations with ChatGPT. Adam discussed his possible suicide with the AI, and the chatbot allegedly advised him on methods. A further wave of outrage was triggered by leaked internal Meta documents, which revealed that the company's chatbots were permitted to engage in "romantic" and "sensual" conversations with minors. These scandals pushed California legislators to seek stricter control over the new technology.

The bill introduces several regulatory requirements. AI companions will be prohibited from discussing suicide, self-harm, or sexual topics, and from flirting with or initiating emotionally charged dialogue with children. Platforms must regularly remind users that they are speaking with artificial intelligence rather than a real person, and minors must receive a prompt to take a break every three hours.

Companies operating these chatbots will be required to publish annual public reports detailing the measures they have implemented to protect users. Parents or guardians of minors will be able to sue companies that violate the rules. Offending firms may face damages of up to $1,000 per violation, as well as court orders halting the offending practices.

Though the bill's final version drops earlier proposals to ban addictive, game-like mechanics and to restrict reward-based engagement features, some industry critics argue that the remaining measures are still excessive and could place undue pressure on businesses. Lawmakers, however, maintain that their goal is not to ban the technology; they say it is important to strike a balance between protecting children's mental health and fostering responsible AI development.

If signed into law, the bill will take effect in January 2026, with reporting requirements starting in the summer of 2027.