In recent months, the idea of banning children under 16 from social media has moved from the margins of online safety debates into the mainstream of UK political discussion, and today a consultation by the Department for Science, Innovation and Technology puts it at the forefront of the Government’s mind. Australia’s decision to introduce age-based restrictions has no doubt accelerated this shift in public discourse, with politicians in the UK increasingly treating a ban as an inevitable next step.
Young people experience very real harms online. Some encounter them directly, and all must navigate digital environments shaped by content, systems, incentives and structures that were never designed with safety in mind. The question, then, is not whether risks exist. It is whether calls for a social media ban reflect a serious attempt to address those risks, or whether they signal something else entirely: a growing lack of confidence in the Online Safety Act 2023 before it has even been fully implemented.
Put simply, is a ban the big red button on the Online Safety Act 2023?
The promise of the Online Safety Act 2023
When the Online Safety Act 2023 became law, it was repeatedly described by government as world-leading legislation that would make the UK “the safest place in the world to be online”. Ministers emphasised its capacity to protect children through duties on platforms to address illegal and harmful content, supported by a new regulatory role for Ofcom.
That framing mattered. It set expectations that the risks children and adults face online would be tackled at a systemic level, through regulation, oversight and accountability. It also reassured parents, educators and civil society that responsibility for online safety would rest with platforms and the state, rather than leaving individual parents and guardians to grapple with the consequences.
Yet that confidence was hard-won. During the Bill’s passage through Parliament, campaigners, particularly those working on tackling violence against women and girls, fought for gender-based harms to be recognised in the legislation. Explicit references to violence against women and girls were removed from earlier drafts, only to reappear later in non-binding guidance rather than in the Act itself. The result is a framework in which women’s and girls’ experiences are acknowledged, but not enforceable, and where protections rely heavily on platform discretion. What is more, online abusive behaviours targeting girls, and the online harms girls specifically suffer, have been effectively erased from the regulatory narrative. Rather, where protections remain in law, they cover children in general as a homogeneous group.
This context matters when assessing current calls for a ban.
Why the political turn matters
Despite the Act’s stated ambition, political attention has increasingly shifted towards the idea of excluding under-16s from social media altogether. MPs have raised the issue in Parliament, and a petition calling for a minimum age of 16 has attracted significant support online. Media coverage has framed a ban as a proportionate response to growing evidence of harm, often invoking tragic cases and parental anxiety as justification. Campaigners have not been silent in this debate. Figures including Ian Russell, chair of the Molly Rose Foundation, have cautioned against treating bans as a solution, arguing instead for stronger enforcement of existing duties and meaningful reform of platform practices. The Online Safety Network has similarly warned that prohibition risks distracting from the work still required to make the Online Safety Act 2023 effective, setting out a ten-point plan focused on strengthening enforcement, clarifying duties and addressing gaps in protection.
What is striking is that the call for a ban is emerging alongside, not after, the Act’s implementation. Ofcom’s child safety duties are still being phased in. Codes of practice are still being developed. Enforcement powers have yet to be fully tested. In this context, reaching for a ban looks less like a considered escalation and more like an admission that the regulatory framework is already being treated as insufficient.
The problem with pressing the big red button
The Online Safety Act 2023 could never eliminate all risks existing in technology-mediated and online environments. It is a complex piece of legislation that relies on risk assessments, compliance with codes and guidance, and regulatory oversight rather than absolute guarantees of safety. Introducing a social media ban at this stage would send a powerful signal that these mechanisms are not trusted to work. It would also signal to platforms that any further delay in taking the issue seriously is a delay too long.
This risks undermining confidence in a framework that Parliament invested years in developing, and that campaigners fought to improve, particularly for women and girls. It also shifts responsibility away from platforms and regulators and towards children themselves, suggesting that the only way to keep them safe is to remove them from digital spaces.
Safety by exclusion or safety by design
A ban, at its core (and literally), is a policy of exclusion. It aims to protect by denying access rather than by improving conditions. Likewise, we know there is a problem with women’s safety in public spaces, especially at night, yet that does not mean the Government would respond with a strategy introducing curfew hours for women to keep them safe.
The harms policymakers are concerned about do not exist solely within social media platforms. They cut across messaging services, gaming environments and other online spaces that would sit beyond the reach of an age threshold.
If a playground were unsafe because of broken equipment or poor supervision, we would not respond by banning children from using it indefinitely. We would repair it, redesign it and ensure it could be used safely. Online spaces should be treated no differently. Excluding children without addressing the underlying conditions that produce harm leaves those conditions intact. It demonstrates a weakness in the Government’s consideration of the issue: reaching for the sticking plaster, or the low-hanging fruit, in order to resolve one of the key issues affecting young people today. It also poses a huge risk for young people, who would lose the opportunity to learn pro-social online behaviours along with a wealth of other skills, including practising critical thinking. It further risks pushing young people into shadows of the internet that are less regulated or not regulated at all, because a ban does not mean young people will stop finding digital spaces in which to connect, as is already starting to prove true for Australia’s teens.
This is where the big red button metaphor becomes instructive. Pressing it may look decisive, but it bypasses the harder task of fixing what is not working.
Strengthening the law we already have
There is no shortage of evidence that children encounter harmful content online at a young age, including sexual material, self-harm content and abuse. This evidence underpinned the case for the Online Safety Act 2023 in the first place. The problem is not a lack of awareness, but a lack of effective and informed delivery.
Rather than abandoning the framework, attention should be directed towards strengthening it. The Online Safety Network’s ten-point plan offers one route, focusing on enforcement, clarity and accountability. Campaigners have also called for more robust recognition of gendered harms, including statutory backing for protections addressing violence against women and girls online.
There is also room for more imaginative regulatory approaches. Extending age-rating systems to video content shared on platforms, for example, could shift responsibility onto companies to ensure age-appropriate access. Conversations about parental responsibility must move beyond expectation and blame, towards meaningful support that recognises the scale and complexity of the online environment, alongside much-needed upskilling and safeguarding of adults in digital spaces too.