California tightens kids’ online safety, vetoes broad chatbot ban

California just drew a sharper line around kids’ online safety, without banning chatbots outright. Governor Gavin Newsom signed a package targeting age verification, social‑media warnings and companion‑chatbot safeguards, while vetoing a broader bill he called overly sweeping. Deepfake distributors now face civil exposure up to $250,000.
Governor Gavin Newsom signs a stack of bills that harden online protections for minors. The thrust is procedural: make products disclose AI, build crisis‑response basics, and wire in age signals from the operating system and app stores so protections trigger consistently. On the same day, he vetoes a wider child‑chatbot curb, calling it too broad to accommodate legitimate uses in education and support.
Companion‑chatbot rules center on transparency and safety. Platforms must disclose that interactions are AI‑generated, add break reminders for minors, and block sexually explicit images for under‑18 users. They must maintain protocols to detect and route self‑harm signals, and share both the playbook and related statistics with public health authorities. Chatbots cannot present themselves as licensed health professionals.
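To see what those duties might look like in practice, here is a minimal sketch of a compliance layer; the function names, the keyword check, and the reminder cadence (flagsSelfHarm, BREAK_REMINDER_EVERY) are illustrative assumptions, not anything specified in the bills.

```typescript
// Hypothetical sketch of a companion-chatbot compliance layer.
// All names, patterns, and thresholds are illustrative, not drawn from bill text.

interface SessionContext {
  userIsMinor: boolean;              // derived from the OS/app-store age signal
  messagesSinceBreakReminder: number;
}

const BREAK_REMINDER_EVERY = 20;     // assumed cadence; statutes leave this to guidance

// Crude stand-in for a real self-harm classifier.
function flagsSelfHarm(text: string): boolean {
  return /suicide|self-harm|hurt myself/i.test(text);
}

// Hypothetical audit hook; a real system would write to durable storage.
function recordCrisisEvent(ctx: SessionContext): void {
  console.log(JSON.stringify({
    event: "crisis_signal",
    minor: ctx.userIsMinor,
    at: new Date().toISOString(),
  }));
}

// Wraps each model reply with the disclosures and checks the rules describe.
function applyGuardrails(userText: string, reply: string, ctx: SessionContext): string[] {
  const out: string[] = [];

  // 1. Disclosure: make clear the interlocutor is AI-generated.
  out.push("Reminder: you are chatting with an AI, not a person.");

  // 2. Crisis routing: detect self-harm signals, surface resources,
  //    and log the event for the statistics shared with health authorities.
  if (flagsSelfHarm(userText)) {
    out.push("If you are in crisis, help is available. In the U.S., call or text 988.");
    recordCrisisEvent(ctx);
  }

  // 3. Break reminders for minors at a fixed cadence.
  ctx.messagesSinceBreakReminder += 1;
  if (ctx.userIsMinor && ctx.messagesSinceBreakReminder >= BREAK_REMINDER_EVERY) {
    out.push("You have been chatting a while; consider taking a break.");
    ctx.messagesSinceBreakReminder = 0;
  }

  out.push(reply);
  return out;
}
```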

Age verification moves upstream. Instead of each app improvising, operating systems and app stores pass an age‑appropriateness signal that apps can honor to gate features and content. The aim is uniform triggers, fewer privacy‑invasive checks and clearer accountability paths when safeguards fail.
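How an app might consume such a signal is sketched below; the AgeSignal shape and bracket names are assumptions, since the actual payloads will be defined by OS and app‑store implementations under forthcoming guidance.

```typescript
// Hypothetical shape of an OS/app-store age signal; real payloads are not yet defined.
type AgeBracket = "under13" | "13to17" | "adult" | "unknown";

interface AgeSignal {
  bracket: AgeBracket;
  source: "os" | "app-store";
}

// Derive feature gates from the signal instead of improvising per-app checks.
function featureFlags(signal: AgeSignal) {
  const minor = signal.bracket === "under13" || signal.bracket === "13to17";
  const unknown = signal.bracket === "unknown";
  return {
    explicitImages: !minor && !unknown,        // blocked for minors; fail closed when unknown
    breakReminders: minor || unknown,          // conservative default for unverified users
    unrestrictedCompanionChat: !minor && !unknown,
  };
}

// An "unknown" signal fails closed, the conservative default.
console.log(featureFlags({ bracket: "unknown", source: "os" }));
// -> { explicitImages: false, breakReminders: true, unrestrictedCompanionChat: false }
```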

The enforcement bite lands in civil court. Victims, including minors, can seek up to $250,000 per action from third parties who knowingly facilitate the distribution of non‑consensual explicit deepfakes. The figure is a ceiling on exposure per action, not a guaranteed award.

The politics are legible in the split. By signing the “mechanical” guardrails and vetoing the sweeping curb, Sacramento signals a line: constrain specific behaviors and require an auditable process rather than ban an entire category. That choice also aligns with a separate move to close the “the AI acted autonomously” escape hatch in civil cases, making it harder to offload blame onto a model.

At the federal level, a contrasting track emerges. The RISE Act in the U.S. Senate would grant conditional civil liability immunity to some AI developers. California is tightening accountability while Washington considers loosening it for certain builders. Whether those tracks collide will depend on preemption language and how narrowly each regime is scoped.

Two brief lines capture the official stance. Newsom says California can lead in AI “but we must do it responsibly – protecting our children every step of the way.” First Partner Jennifer Siebel Newsom adds that leadership includes “setting limits when it matters most.” 

For developers and platforms, the to‑do list is tangible: add AI disclosures in chat, build and document crisis‑escalation flows, wire in OS/app‑store age signals, and review deepfake moderation and response. None of it bans AI outright; it demands a documented, auditable process. The near‑term friction sits in implementation details, agency guidance, and how quickly app ecosystems adopt the new signals without breaking mixed‑audience experiences.
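One way to make that process demonstrable is to emit a structured record every time a safeguard fires; the record shape below is a hypothetical illustration, not a regulatory schema.

```typescript
// Illustrative audit record for demonstrating compliance; field names are assumptions.
interface SafeguardAuditRecord {
  timestamp: string;
  sessionId: string;
  safeguard: "ai_disclosure" | "break_reminder" | "crisis_escalation" | "age_gate";
  ageBracket: string;                 // as received from the OS/app-store signal
  outcome: "shown" | "triggered" | "blocked";
}

function audit(record: SafeguardAuditRecord): void {
  // A real pipeline would append to tamper-evident storage for later review.
  console.log(JSON.stringify(record));
}

audit({
  timestamp: new Date().toISOString(),
  sessionId: "demo-session-1",
  safeguard: "ai_disclosure",
  ageBracket: "13to17",
  outcome: "shown",
});
```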