IWF Discovers Child Sexual Imagery Generated by Grok AI

The Internet Watch Foundation (IWF) has identified disturbing imagery of children, specifically girls aged between 11 and 13, that appears to have been generated using Grok, the AI tool developed by Elon Musk's company xAI. The discovery raises serious concerns about the misuse of artificial intelligence to create illicit content.

Details of the Discovery

According to the IWF, analysts uncovered what they described as “criminal imagery” on a dark web forum, where users claimed to have employed Grok to produce “sexualised and topless imagery of girls.” The IWF emphasised that while the material is classified as Category C under UK law, the least severe category of criminal imagery, it nonetheless poses serious legal and ethical issues.

In an alarming twist, the IWF’s Ngaire Alexander noted that the user who uploaded the material subsequently used a different AI tool, not affiliated with xAI, to create a more serious Category A image. Alexander expressed deep concern at how quickly and easily individuals can now generate photo-realistic child sexual abuse material (CSAM).

Response from X and the Broader Context

The IWF operates a dedicated hotline for reporting suspected CSAM and employs analysts who assess the legality and severity of such material. Notably, the images associated with Grok were found solely on the dark web and were absent from the social media platform X, where Grok is also accessible. The IWF has previously received reports of similar content on X, though those instances have not met the legal definition of CSAM.

Ofcom has previously engaged with both X and xAI over concerns that Grok could be exploited to create sexualised images of children. The BBC has documented several instances on X in which users asked the chatbot to alter existing images, producing sexualised depictions of women without their consent.

X has publicly stated: “We take action against illegal content on X, including CSAM, by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.” The platform added that users who generate illegal content via Grok face the same repercussions as those who upload such content directly.

The IWF’s findings underline the urgent need for regulatory measures to address the exploitation of AI technologies in creating harmful imagery. The continued development and deployment of tools such as Grok demand careful oversight to prevent misuse and to protect vulnerable groups, particularly children.