Australia Banned Kids From Social Media. The Kids Are Winning.
Two months into the world's first under-16 social media ban, 90% of teens say they never lost access. Six countries are copying the homework anyway.
An 11-year-old fooled Australia's facial recognition system by drawing on a fake moustache with eyeliner. A pet dog passed as over 16. And a 14-year-old named Adyan used his friend's driver's licence to unlock Instagram in under a minute.
Australia's social media ban for under-16s — the first of its kind on Earth — went live on December 10. Two months later, the teenagers it was built to protect are treating it like a game they've already beaten.
The Numbers Look Great. The Reality Doesn't.
The Australian government says 4.7 million accounts have been deactivated since the ban started. That sounds like a victory. It isn't.
Researchers point out that many teens had multiple accounts across platforms. The real number of kids affected is far lower. More importantly, the figure says nothing about where those displaced teens went next.
ABC Australia interviewed teenagers across the country. Their estimate: about 10% actually got banned. Half of those got back on within days.
"It's completely useless," Adyan told ABC. He still has Snapchat, TikTok, and Instagram. The facial scan on Snapchat thought he was over 16. Instagram needed ID — so he borrowed a friend's.
A 15-year-old named Alby described a new cottage industry: kids paying older friends to do ID scans for them. "It's really easy to get around," he said. "I personally haven't even gotten a message saying 'you're banned.'"
The Technology Is a Mess
The law is deliberately "tech-neutral." Platforms can pick whatever age verification method they want, as long as they can call it "reasonable steps." So they picked cheap ones.
Meta used selfie-based facial age estimation. TikTok used existing account data. Snapchat asked users to declare their age. The Australian Strategic Policy Institute (ASPI) called the result "inexperienced bouncers checking IDs at a nightclub."
The problems showed up immediately. Eleven-year-olds were estimated to be 30. Sixteen-year-olds — old enough to use social media legally — got locked out. The government's own Age Assurance Technology Trial found that no single verification method worked reliably across all platforms.
Meanwhile, VPN downloads surged before the ban even started. Fringe apps like Yope and Lemon8 saw downloads jump by up to 251%. And platforms exempted from the ban — Discord, Roblox, Steam — quietly became the new teenage gathering spots, carrying many of the same unmoderated risks the ban was supposed to address.
The Collateral Damage Nobody Planned For
Here's the part that doesn't make the victory speeches.
Amanda Lennestaal, a single mother in Sydney, has teenage children with disabilities. She told ABC the ban "removed a space of authentic connection." For her kids, online spaces were the most accessible social environments they had — no physical barriers, no sensory overload, no communication gaps.
Peter De Waard's two sons run a band called the Wave Raiders. His 15-year-old managed their social media accounts. Now Peter does it himself, full-time. "It's an absolute nightmare," he said. "For some kids, banning social media is absolutely the right measure. For others, it's way more harmful than positive."
The ban makes no exceptions for parental consent. You can't opt your kid in, no matter the circumstances.
Six Countries Are Copying the Homework Anyway
Australia's messy experiment hasn't slowed anyone down. If anything, it's accelerated a global wave.
Malaysia banned under-16s from social media in January. Spain announced its own ban in February. France, Greece, Denmark, and the UK are all working on restrictions. The UK House of Lords added a social media ban amendment to a children's wellbeing bill this month.
CNBC spoke with Ravi Iyer, policy advisor at Jonathan Haidt's Anxious Generation movement, who called a US version "inevitable."
The catalyst isn't just youth mental health data. It's the Grok scandal. Between December 2025 and January 2026, Musk's AI chatbot generated roughly 3 million sexualised images — including an estimated 23,000 that appeared to depict children. The UK's ICO opened a formal investigation. Ireland's Data Protection Commission ordered X to preserve all Grok-related data. A class action lawsuit has been filed in the US.
When an AI can generate 23,000 images of children in two weeks and the platform hosting it does nothing until regulators intervene, the "let the market self-regulate" argument loses its last defenders.
The Uncomfortable Question
Australia's ban exposes a tension that every country following it will face. The problem is real. Kids' mental health data is grim. A 2018 study of half a million adolescents found depressive symptoms rose in lockstep with social media use. The platforms know it — Meta's own internal research showed Instagram made body image issues worse for one in three teen girls.
But the solution isn't working. Not because the intent is wrong. Because the execution assumes you can build a wall around the internet and expect teenagers not to climb it.
ASPI put it bluntly: part of the problem is that the government treated this as a policy achievement rather than a technology challenge. They legislated the destination without mapping the road.
The kids who most needed protecting — those in abusive homes, those with disabilities who rely on online community, those in rural areas with no offline alternatives — are either still exposed on unmoderated fringe platforms or cut off from their only social lifeline.
Holly Grosshans at Common Sense Media argues the answer isn't banning kids from the internet. It's forcing platforms to stop designing products that exploit developing brains. Age-appropriate design codes — which California, the EU, and the UK are all experimenting with — put the burden on companies to make their products safe by default, rather than asking parents and governments to police every login.
What Happens Next
Australia will spend 2026 learning whether its ban creates the safety outcomes it promised or just displaces harm into harder-to-reach corners. Every country watching will draw its own conclusions.
The early evidence isn't encouraging. A 14-year-old with eyeliner and a friend's driver's licence shouldn't be able to defeat a $49.5 million-per-violation law. But here we are.
The question isn't whether kids should be safer online. Everyone agrees on that. The question is whether banning them from the room works better than fixing the room.
Two months in, the room's still broken. And the kids never left.