EU investigates X over Grok deepfakes violating DSA rules

The EU has opened a formal probe into X over Grok-generated sexual deepfakes that may qualify as child sexual abuse material.

The European Commission has launched a formal investigation into X following reports that the platform’s built-in Grok AI chatbot generated and circulated sexualized deepfakes that may fall under the definition of child sexual abuse material. Regulators will assess whether the company properly identified and mitigated risks before rolling out Grok in EU countries.

EU digital policy commissioner Henna Virkkunen called the issue “a violent and unacceptable form of degradation.” The probe falls under the Digital Services Act, which imposes strict requirements for preventing illegal and harmful content and for ensuring algorithmic transparency.

Public concern around Grok has escalated in recent weeks as users in multiple countries reported cases of the chatbot creating sexualized images of women and minors. In the UK, Ofcom has already opened an investigation under the Online Safety Act, while authorities in France and India have accused the system of generating images of people without their consent.

X, owned by xAI, said it removes illegal content, blocks associated accounts, and cooperates with law enforcement. The company stressed that it maintains zero tolerance for child sexual exploitation, non-consensual nudity, and unwanted sexual content.

The probe adds to the EU’s mounting pressure on X. The platform was previously fined €120 million for DSA violations ranging from misleading paid verification badges to failing to give researchers access to platform data and to maintain a proper advertising repository. The new case could result in penalties of up to 6% of global annual revenue.

The situation is fueling political tension between the EU and the United States. U.S. Vice President JD Vance recently argued that European regulators are “attacking American companies under the guise of protecting free speech.” The Grok investigation risks deepening this divide, touching on one of the most sensitive areas: AI-generated content and child safety.

EU officials will now determine whether X failed to meet its obligations to manage systemic risks linked to Grok’s launch and whether adequate safeguards were in place to prevent the spread of harmful AI-generated material.
