The benefits and flaws of child social media bans

Whether to legislate against Under-16s accessing a central part of contemporary society is a complex question involving law, technology, privacy, rights, and the nature of a child's development

Scarlett Yang


The debate over whether to ban Under-16s from using social media has returned to the fore after Britain's House of Lords voted in favour of prohibition. The upper house may not be the most social-media-savvy institution: most of the 800-plus Lords are over the age of 70, dozens are over the age of 90, and only 3.7% are under the age of 50. Still, their votes matter, and they voted in favour of a legislative amendment to prevent those aged 15 or younger from accessing the platforms.

The British government has been consulting widely on children's relationship with the digital space, and there is some evidence of cross-party support for a ban, which could reflect a broader shift in mood both in the UK and beyond. Increasingly, legislators are having to deal with big, era-defining questions. How should children's presence in the digital space be regulated, if at all? What are the limits of legal protection? These are questions that experts are trying to answer. Social media is now an inseparable part of everyday life, but early teenage years are formative, and children are uniquely vulnerable.

Legal responsibility

A social media ban for Under-16s refers to a set of measures aimed at preventing minors from creating or maintaining accounts. It does not imply criminalising the child or their family. Rather, legal responsibility is typically placed on the companies operating the platforms, obliging them to verify users' ages and to prevent access by those deemed too young under the law.

The type of ban voted for by Britain's Lords differs from the digital age of consent under data-protection law such as the GDPR, which refers to the age at which a child is considered legally capable of consenting to the collection and processing of their personal data when using digital services without parental approval. A social media ban, by contrast, entails preventing access altogether, not merely regulating its conditions or data requirements.

The idea of banning minors from social media has gathered pace as platforms' voluntary policies have increasingly failed to limit children's access effectively. The first legally binding age-based ban was recently enacted in Australia. Sites now off-limits to children include Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Reddit, and the streaming platforms Kick and Twitch. WhatsApp, YouTube Kids, and Google Classroom are exempt.

REUTERS/Hollie Adams
Social media apps on a phone before the law banning them for those under 16 came into effect in Australia, on 9 December 2025.

Countries such as Malaysia and New Zealand seem set to follow suit, which may confirm the shift from 'safe use' to 'age-based ineligibility'. In Europe, the European Parliament has debated the issue and issued non-binding positions calling for a unified minimum age, but implementation is up to member states. France has proposed an age-based ban, while other countries have opted for regulatory frameworks centred on data protection or age-appropriate design.

Some argue that bans act more as symbolic measures, without addressing the root causes of digital violence or exploitation

Age assurance

Implementing an age-based ban requires age assurance: a set of tools that verify a user's age before granting access or allowing account creation. Age assurance is at once a technical measure and a regulatory instrument. Methods vary, from checks against official documents or digital identity systems, to approaches based on age estimation or biometric data, to indirect mechanisms linked to the device used, account behaviour, or usage patterns.

Some worry that the verification processes themselves are a new source of digital rights violations. There is a fine line between legal compliance on the one hand, and protecting privacy and minimising personal data collection on the other. Legally, compliance is assessed based on the mechanism's effectiveness, including its real-world capacity to prevent unauthorised access and how difficult it is to circumvent.

REUTERS/Hollie Adams
Ni Wang (14) and Iris Tolson (15) use their phones before the ban comes into effect in Sydney, on 22 November 2025.

Laws that impose age-based bans typically require platforms to take "effective steps" to prevent under-age access, a standard that allows some discretion but places a clear burden on companies to demonstrate that their systems work. Yet questions remain about the limits of responsibility and legal liability when minors bypass safeguards to access a banned platform. In most cases, legal responsibility still lies with the platform operator, as the entity with the capacity to prevent access.

Pros and cons

Proponents of a social media ban for Under-16s say it is about protecting minors from online harms such as digital addiction, cyberbullying, commercial exploitation, and exposure to age-inappropriate content. Expecting parents to regulate their children's technology use is no longer realistic, they say, given that social media platforms are designed to capture attention and extend usage time, all of which can have long-term effects.

Those who oppose a ban point to privacy risks inherent in age-verification systems that collect sensitive or biometric data. Beyond that, bans can lead to digital exclusion, they say, especially for children who go online for learning, psycho-social support, and communication, sometimes to combat isolation or vulnerability. Opponents also point to the relative ease with which verification checks can be beaten.

Some critics say the most dangerous aspect of an age-based social media ban lies in the normalisation of digital surveillance under the banner of child protection, which amounts to creeping state or corporate intervention in individuals' digital lives. Yet regardless of one's own position, the nature of the debate shows that social media bans for children cannot be reduced to a purely technical or regulatory matter. They involve complex ethical, legal, and social choices that require a delicate balance between protecting children and safeguarding digital rights.

A composite model

Britain is of interest in the global debate because it is building a multi-layered regulatory framework rather than enacting a single, comprehensive law. This approach reflects a layered understanding of digital risks and illustrates how regulation of the digital space has evolved from a data-centric focus to one of child protection.

REUTERS/Hollie Adams
A phone displays a message from TikTok as the social media ban for those under 16 comes into effect in Sydney, on 10 December 2025.

The British legal framework has three interconnected layers. The first is the digital age of consent, set by UK law at 13, which grants children of that age the legal capacity to consent to the processing of their personal data. This does not confer an unconditional right to use social media and is confined to issues of data and privacy.

The second layer is the Children's Code (formally, the Age-Appropriate Design Code), which obliges digital platforms to design their services in ways that prioritise the best interests of minors, including default settings, enhanced privacy protections, and the reduction of design practices that may cause psychological or behavioural harm. This shifts the focus to platforms' responsibility to create a less harmful digital environment for children.

The third layer is the Online Safety Act. This imposes strict obligations on platforms to assess the risks of children accessing their content and services, and makes age verification a central regulatory tool for controlling access.

Those who favour bans say children are fragile and incapable of discerning danger in a platform or algorithm

Last month, Britain's House of Lords voted 261 to 150 in favour of an amendment to the government's schools bill that would ban social media use for Under-16s. Although the amendment was led by opposition and crossbench peers, plenty of Labour peers also backed it. The vote does not mean that the ban is now law; the Lords' amendment must still pass through the House of Commons. Still, it puts ministers in a bind.

More broadly, the gathering momentum behind social media bans for children reflects a deeper crisis in society's relationship with childhood, time, and legal authority in the digital space. Laws not only regulate behaviour but also define who a child is, when they are considered capable, and what they are to be protected from. For many worried parents, the digital realm can disrupt traditional age-based development.

Social media exposes children to experiences, language, and images historically confined to the adult world. But is exclusion and prohibition the answer? Is the child an entity to be isolated from the world until maturity, or an emerging self already living within an interconnected reality that cannot be technically disentangled?

Critics argue that today's child is structurally embedded in—and shaped by—the digital realm, whether they have a personal account or not

Those who favour bans say children are fragile and incapable of discerning danger in a platform or algorithm, yet others argue that today's child is structurally embedded in, and shaped by, the digital realm, whether they have a personal account or not. As such, bans can act more as symbolic measures, without addressing the root causes of digital violence or exploitation. Still others say the argument cannot be reduced to a defence of absolute freedom for children, who can be protected in other ways.

Technology moves quickly, but laws enacted in haste—sometimes in response to moral panic or media pressure—tend to offer binary solutions (such as bans), when in fact long-term structural approaches are needed. These include rethinking platform design, questioning attention-based digital economies, and empowering children through knowledge and critical capacity.

Social media bans for children strike at the heart of what it means to be a child, and the relationship being forged between law, technology, and the emerging self. The issue is not solely about restricting access, but about determining who has the right to engage (and likely err) within a space now woven into our social and political reality. No doubt the debate will rumble on, whether online or off.
