Ireland’s Data Protection Commission (DPC) announced on Tuesday the launch of a formal inquiry into social media platform X over Grok’s handling of personal data and the chatbot’s capacity to generate harmful sexualized imagery and video, specifically involving minors.
The DPC serves as X’s primary EU regulator because the American firm’s European headquarters are located in Ireland.
Under the General Data Protection Regulation (GDPR), the commission has the authority to impose fines of up to 4% of a company’s total global annual turnover.
X was officially notified of the probe on Monday, according to a DPC statement.
The investigation aims to determine whether the company has complied with its obligations under the GDPR in relation to the processing of personal data connected to Grok.
“The DPC has been engaging with XIUC (X Internet Unlimited Company) since media reports first emerged a number of weeks ago” concerning Grok’s alleged ability to generate such imagery, the commission’s Graham Doyle said.
“As the Lead Supervisory Authority for XIUC across the EU/EEA, the DPC has commenced a large-scale inquiry,” Doyle said, adding that this would examine XIUC’s compliance with some of its “fundamental obligations under the GDPR in relation to the matters at hand”.
Last month, Grok saturated the platform with sexualized AI-generated images of real people, drawing criticism from regulators and child-safety groups.
US President Donald Trump and several administration officials have denounced EU oversight of American technology firms, characterizing the multi-billion-dollar fines levied by the 27-nation union as a disguised form of taxation.
Elon Musk, the billionaire owner of X and of xAI, the company behind Grok, has been similarly critical of European regulation of his businesses.
Separately, the European Commission initiated a probe on 26 January to determine if Grok facilitates the spread of illegal material, including non-consensual sexualized media, within the union.
Furthermore, on 3 February, the United Kingdom’s data protection authority, the Information Commissioner’s Office, commenced its own statutory investigation into Grok’s data practices and its role in producing harmful sexualized images and videos.
Earlier this month, French prosecutors raided X’s Paris offices and summoned Musk for questioning.
The United Nations children’s agency UNICEF has also called for countries to criminalize the creation of AI-generated child sexual abuse content.
The agency urged developers to implement safety-by-design approaches and guardrails to prevent misuse of AI models.
“The harm from deepfake abuse is real and urgent. Children cannot wait for the law to catch up,” UNICEF said in a statement.