Australia’s Teen Social Media Ban: Why 70% of Kids Are Still Scrolling
The government celebrated millions of accounts being removed. But a closer look at the first official report reveals a far more uncomfortable truth — the ban isn’t working the way anyone hoped.
When Australia introduced one of the world’s toughest social media restrictions late last year — barring anyone under the age of 16 from holding an account on major platforms — it made international headlines. Governments around the world watched closely. Parents celebrated. Critics worried. And experts predicted, almost to a person, that it simply would not hold.
Three months in, Australia’s eSafety Commissioner has released its first major assessment of how the ban is performing. The headline numbers sound impressive. Dig a little deeper, and a very different picture emerges.
What the Report Actually Says
The eSafety Commissioner’s office confirmed that roughly 4.7 million social media accounts belonging to children under 16 have been removed, deactivated, or restricted since the laws took effect in December. An additional 310,000-plus young users were blocked from accessing platforms altogether. For a country of around 28 million people, those are numbers that any regulator would feel proud of announcing.
Except the same report also quietly acknowledges a critical limitation: the 4.7 million figure counts account actions, not actual users who have lost access. In other words, a child whose account was removed may have simply created a new one the following day — and the numbers would not reflect that.
“The figure reflects the number of age-restricted accounts removed or restricted — not the number of users who have lost access to one or more platforms.”
— eSafety Commissioner’s Report, 2025

The more telling data came from a survey of 898 parents and carers of children aged 8 to 15. When asked whether their child was still using social media apps, 70% said yes. That single number undercuts almost every triumphant claim made about the ban’s effectiveness.
Platforms Aren’t Asking Kids to Prove Their Age
For any age-restriction system to work, platforms must actually enforce it. And on that front, the results are just as worrying. Among parents surveyed whose child still had access to at least one social media platform, nearly 67% said the app in question had never asked their child to verify their age at all.
Key Enforcement Gaps Identified
- No age verification requested in the majority of still-active cases
- Children using workarounds such as new accounts or false birth dates
- No measurable drop in harm reports filed with eSafety
- Platform compliance remains inconsistent across major apps
- Regulations include interpretation leeway that complicates legal action
Perhaps most damning of all: the report found that the volume of harm reports made to eSafety by children under 16 has shown no discernible drop since the ban came into effect. If the goal was to shield young people from online harm — bullying, inappropriate content, predatory behaviour — the evidence suggests it has not achieved that goal, at least not yet.
Why Teens Are So Hard to Lock Out of Social Media
Digital natives have always been quicker to find workarounds than regulators are to close them. A child who wants to stay on Instagram can enter a false birth year during sign-up. They can use a parent’s account. They can access platforms through a browser rather than an app, bypassing some age-gating features entirely. The ban was never going to be a watertight seal — and the data confirms it isn’t.
But there’s a deeper point here. For teenagers in 2025, social media is not a luxury or a distraction. It is where friendships are maintained, news is consumed, communities are formed, and social capital is built. Asking a 15-year-old to simply walk away from it is roughly equivalent to asking them to stop speaking to their friends entirely. The digital and physical social worlds are no longer separate — they are the same world.
Online interaction is not a phase teenagers will grow out of. It is a fundamental part of how the current generation connects, learns, and builds identity.
What Experts Have Been Saying All Along
The experts who warned against blanket bans did not do so because they wanted children exposed to harm. They did so because they understood something important: prohibition rarely works when demand is this strong and the barriers to circumvention are this low. What they advocated for instead — and continue to advocate for — is meaningful digital literacy education. Teaching young people how to navigate online spaces safely, critically, and with a healthy sense of self.
A ban, by contrast, removes the conversation. It tells a teenager that the internet is something to be protected from, rather than something to engage with wisely. And when that teenager inevitably gets back online — which, as we now know, 70% of them already have — they do so without the tools or the language to handle what they find.
What Happens Next
The eSafety Commissioner’s office has not given up. In response to the report’s findings, the regulator said it plans to sharpen its focus on platform compliance, with firmer enforcement action expected by mid-year. Social media companies that fall short of the standards set out in the regulations could face financial penalties.
There is, however, a complicating factor. The regulations themselves allow for some degree of leeway in interpretation, which makes it harder to pursue legal remedies when a platform falls short. The eSafety office has acknowledged this and says it intends to push through regardless.
For parents, the report is a reminder that legislation alone cannot substitute for conversation. Knowing what platforms your child uses, understanding what they are seeing there, and talking openly about online life may be less dramatic than a government-level ban — but it is considerably more effective.
The Global Picture
Australia is not alone in wrestling with this question. Governments from the United Kingdom to several US states have introduced or are actively considering similar restrictions. Each is watching Australia’s experience carefully. If the world’s first major teen social media ban has struggled to keep even a third of young users offline, that is a signal that future policies need to be built on a far more sophisticated foundation than an age threshold alone.
The conversation about children and social media is far from over. But it is becoming clearer that the answer lies in smarter design, transparent enforcement, and education — not just a wall that almost every determined teenager can climb over in under five minutes.
Frequently Asked Questions
What exactly does Australia’s social media ban do?
Australia’s law, which took effect in December 2024, bans children under the age of 16 from creating or maintaining accounts on major social media platforms. Platforms found to be non-compliant face potential financial penalties. The ban applies to the country’s most widely used social apps but allows for some regulatory discretion in enforcement.
Why are so many children still able to use social media?
The main reason is the lack of robust age verification by the platforms themselves. Nearly 67% of parents whose child still had access said the platform never asked for age verification. Teens also use workarounds such as new accounts with false birthdates, a parent’s account, or browser-based access where app-level restrictions don’t apply.
Doesn’t the removal of 4.7 million accounts mean the ban is working?
Not necessarily. The eSafety Commissioner’s own report clarifies that the 4.7 million figure reflects account actions — removals, restrictions, or deactivations — rather than individual users who have permanently lost access. A child whose account was removed may have already created a new one, making the headline number misleading as a real-world measure.
Has the ban reduced the harm children experience online?
According to the report, no. Harm reports filed with eSafety by children under 16 have shown no discernible decrease since the restrictions were introduced. Those who remain online — or who found their way back — continue to encounter harmful content and experiences at roughly the same rate as before.
What do experts recommend instead of a blanket ban?
Most digital literacy experts argue that education is more effective than prohibition — teaching children to critically evaluate content, recognise manipulation, manage screen time, and protect their privacy. A young person with these skills is far safer online than one who is banned but circumvents restrictions without any guidance.
Can platforms be penalised for failing to comply?
Yes. The Australian regulations include provisions for financial penalties against platforms that fall short of compliance standards. However, the rules also allow for some flexibility in interpretation, which the eSafety Commissioner acknowledges could make legal action challenging. The office says it intends to move toward firmer enforcement by mid-2025.
Are other countries considering similar restrictions?
Several countries are watching Australia’s experiment closely. The United Kingdom, parts of Europe, and a number of US states have introduced or are actively considering age-based restrictions on social media for minors. Australia’s results — both wins and shortfalls — will likely shape how these policies are designed and enforced globally.
What can parents do to keep their children safe online?
The most effective approach is an ongoing, open conversation about social media — what your child sees, who they interact with, and how it makes them feel. Parental control tools on devices and routers can help, though they are not foolproof. Children who feel they can talk to a trusted adult are far more likely to seek help when something goes wrong online.
