The recent discussions surrounding the Grok chatbot on X have reignited urgent questions about how and when the UK takes technology-facilitated violence against women and girls (TFVAWG) seriously. While the latest episode prompted political commentary and renewed regulatory scrutiny, it did not reveal a new problem. Rather, it exposed once again how harms that disproportionately affect women and girls are allowed to persist until public outrage and media headlines make inaction untenable.
The responsibility to launch an investigation into technology companies’ compliance with their legal duties lies with Ofcom, the statutory regulator under the Online Safety Act 2023 (OSA).
Ofcom has opened a formal investigation under the OSA into whether X complied with its legal duties to protect users from illegal and harmful content through the Grok AI chatbot hosted on the platform, following reports that the tool was used to create and share sexually explicit and illegal imagery of women and girls. Ofcom has confirmed that the investigation is ongoing and will assess compliance with duties such as risk assessment and the removal of illegal content, with the possibility of fines or court-ordered measures if breaches are found.
This followed days of discussion in the media and in Government, with correspondence from Committee Chairs raising concerns with the Secretary of State for Science, Innovation and Technology, Liz Kendall, and the Chief Executive of Ofcom, Melanie Dawes, about gaps in the OSA, and calling for a more robust response to all tech companies that host features similar to Grok. The Prime Minister, Keir Starmer, echoed the outrage by calling the actions of Grok and X ‘disgusting and shameful’ at PMQs.
A long-signalled problem, not a sudden crisis
The suggestion that the Grok issue emerged suddenly in the last couple of weeks, without warning, does not withstand scrutiny. Reports about the misuse of Grok and other generative AI tools to create abusive, sexualised and violent imagery of women and girls predate the last fortnight. To suggest otherwise would undermine the victims, campaigners, researchers, journalists and civil society organisations who have been raising alarms for some time about the capacity of these systems, in the wrong hands, to generate unlawful material.
More broadly, the use of AI to generate abusive images of women and girls has been extensively documented. The Internet Watch Foundation’s 2024 report found that over 98% of the AI-generated child sexual abuse material it identified depicted girls, underscoring not only the scale of the problem but its deeply gendered nature. This was not presented as a speculative future risk, but as an existing and escalating harm.
Against this backdrop, the Grok controversy should be understood not as an outlier, but as part of a wider pattern of technological misuse that regulators and lawmakers have been warned about repeatedly.
Why does it still take a tipping point?
This leads to a familiar and troubling question: why does it take a high-profile moment to catalyse change for women and girls?
Issues relating to online abuse, non-consensual intimate images (NCII), and wider forms of TFVAWG have been the subject of sustained critique from the women’s support sector, academia, international organisations and parliamentary committees for years. Gaps in the law and in regulatory guidance have been openly acknowledged by many.
Yet law reform has consistently moved slowly. The OSA was years in the making, and its implementation remains incremental. Ofcom’s guidance on VAWG-related harms has improved, but concerns remain about whether it is sufficiently agile to respond to rapidly evolving technologies such as generative AI, and about its reliance on the goodwill of companies to cooperate with the regulator. Importantly, Ofcom’s guidance remains voluntary, despite long-standing calls from women’s organisations and academics for a mandatory code of practice.
What the Grok episode illustrates is not a lack of knowledge, but a lack of urgency. Routine abuse experienced by women online has rarely been sufficient to push change over the line. Instead, it is only when harm becomes spectacular, extraordinary, visible, and politically uncomfortable that decisive action seems to follow.
Regulation after harm has occurred
Ofcom’s current investigations are both necessary and welcome. However, the timing of this activity raises legitimate questions about regulatory responsiveness. A system that acts primarily in reaction to crises risks embedding a culture of harm tolerance, where abuse is effectively permitted until it attracts sufficient attention.
If the OSA is to fulfil its promise, Ofcom must be willing to act preventatively, not merely responsively. That means using its existing powers to anticipate and mitigate risks posed by emerging technologies, rather than waiting for their harms to be incontrovertibly demonstrated in public.
Law reform and the limits of legislative catch-up
The forthcoming commencement of the new offence introduced in the Data (Use and Access) Act 2025 represents an important development. It reflects growing recognition that AI-generated abuse can be as harmful as material produced through other means. Why this provision sat on the statute book for months before coming into force, despite the new VAWG Strategy and the Labour Government’s pledged commitment to halve VAWG within the next decade, is a further question of urgency.
Yet this reform does not close all gaps. Questions remain around the criminalisation of possession of deepfake and NCII material, enforcement pathways, the capability and readiness of the criminal justice system to operationalise this commitment, and the practical burdens placed on victims seeking redress. As with previous reforms, the law has arrived after the harm has already become entrenched.
Human agency and platform responsibility
It is also essential not to obscure the role of human agency in this discussion. AI systems do not independently decide to generate abusive imagery. They respond to prompts, and those prompts are overwhelmingly entered by men.
This mirrors what we already know about violence against women and girls offline: digital spaces do not create misogyny, but they provide new avenues through which it can be expressed, normalised and amplified. The use of generative AI to target women and girls is therefore not accidental.
Elon Musk’s reaction to the outbreak of this scandal, indifference followed by moving Grok’s harmful tools behind a premium paywall (thereby monetising the harm), is also a bitter reminder of the Government’s failure to meaningfully call out and tackle the profitability of online harms suffered by women and girls.
Equally concerning is how platform leaders have sought to frame regulatory intervention as ‘censorship’ or an attack on freedom of expression. This rhetoric, exemplified by repeated statements from X’s owner, positions harm prevention as ideological overreach, while downplaying the real and foreseeable consequences for women and girls.
Towards a more resilient response
The Government’s recently published Strategy to tackle VAWG acknowledges the clear link between online and offline violence. However, its treatment of technology-facilitated abuse remains limited, with a tendency to focus on sexualised harms at the expense of the broader spectrum of gender-based violence enabled by digital tools.
Meanwhile, parliamentary consensus is emerging that the scale and speed of these harms constitute a crisis. The challenge is whether the current regulatory framework, and the regulator itself, can respond with the urgency that consensus demands.
A more transformative approach
Does the system therefore need a wholesale, transformative overhaul? One potential way forward lies in proposals for an independent TFVAWG observatory. An observatory model could provide specialist, centralised oversight of online gender-based harms, offering victims clearer reporting routes, generating real-time evidence for regulators, and supporting more proactive enforcement.
Crucially, it would recognise that TFVAWG is not an ancillary issue to be addressed only when scandals erupt in headlines, but a systemic problem requiring sustained, expert attention to prevent harm from happening.
As Ofcom’s investigations continue, the question is not simply what action will be taken in this instance. It is whether the UK can move beyond a cycle of reactive regulation, where women and girls must wait for the next tipping point before their harms are addressed.
If the Grok episode tells us anything, it is that waiting for outrage is itself a regulatory failure, one with predictable and preventable consequences.