FTC orders AI chatbot firms to disclose child‑safety practices

The US Federal Trade Commission (FTC) sent compulsory 6(b) orders to seven AI chatbot operators: OpenAI, Alphabet, Meta, xAI, Snap, Character Technologies and Instagram. The orders demand details on how each company’s bots handle minors, monetize engagement and mitigate harm.
The agency wants a clear picture of design and oversight: how firms process inputs and generate outputs; approve and manage characters; measure, test and monitor for negative impacts before and after deployment; and mitigate risks to minors.
The orders also ask how companies impose and enforce age limits; what disclosures they provide to users and parents; how they collect, use or share personal data; and how they police terms of service and community rules, including compliance with the Children’s Online Privacy Protection Act (COPPA).
FTC Chair Andrew N. Ferguson framed the move as a balance between safety and innovation:
“Protecting kids online is a top priority for the Trump‑Vance FTC, and so is fostering innovation in critical sectors of our economy.”
The Commission voted 3–0 to issue the orders, which kick off a broad study rather than an enforcement action. Companies are expected to respond within 45 days.
The inquiry follows mounting concern over AI bots that simulate friendship or intimacy with young users. Advocacy groups recently documented hundreds of harmful interactions in a short test window, including exchanges involving sexual content, drugs and inappropriate relationships with users aged 12–15. High‑profile incidents and litigation over chatbot interactions with minors have also elevated the issue.
Regulators are asking for monthly data on engagement, revenue and safety incidents by age band (children under 13; teens 13–17; minors under 18; young adults 18–24; and users 25 and over). The goal is to map how companion bots are built and how they evolve, and to check whether firms run robust pre‑launch evaluations and post‑launch monitoring to catch harm, then disclose those risks to families in a clear and useful way.
State policy is moving too. California’s SB 243, now on the governor’s desk, would set baseline rules for AI companions after a run of scandals and tragic cases, aiming to restrict what the bots can do in conversations with minors and to give parents clearer notice and stronger controls.