“The First Domino”: Australia Tests an Age-Based Social Media Ban
Early enforcement offers a preview of what works, what breaks, and what moves elsewhere
On December 10, Australia's long-anticipated Online Safety Amendment (Social Media Minimum Age) Act came into effect, marking the world's first blanket social media ban for children under 16. Australian eSafety Commissioner Julie Inman Grant called the law "the first domino": a defining moment in child safety that governments around the world are expected to follow. (For more details on other national proposals, see our Digital Childhood Briefing from last week.)
Since its passage in parliament at the end of 2024, the Australian law has drawn both loud applause and sharp criticism. Some argue it will help restore healthier childhoods and protect youth from online harms ranging from addiction to grooming; others frame it as an overreach that threatens children's digital rights to information and communication. Over the past year, two questions have dominated the debate: How will the law be implemented? And will it actually work?
Age assurance sits at the center of both questions, and it is an area where Vys has worked extensively through our client partnerships and the publication of our practical deployment handbook. As regulation like the SMMA continues to accelerate globally, we're committed to helping you and your teams make sense of what's changing and what may come next. To that end, this piece breaks down the latest on the Australian SMMA rule: where implementation stands, how stakeholders are responding, and what it all means for the future of age assurance.
This issue will cover:
The basics: What the SMMA actually requires, which services are covered, and where regulators have left room for interpretation
The implementation: Early signals on age-assurance approaches, technical tradeoffs, and enforcement realities
The reactions: What platforms, regulators, parents, youth, and civil society are saying — and what these responses reveal about the law’s feasibility, legitimacy, and likely ripple effects across the globe
The basics: What SMMA requires
The Social Media Minimum Age rule requires social media companies, as defined by the law, to "take reasonable steps to prevent Australians under the age of 16 from creating or keeping an account." Put simply, designated platforms must implement some form of age assurance, whether that be ID-based verification, facial age estimation, or behavioral signals. While young users might still be able to view content on certain services, they are essentially blocked from posting, commenting, and interacting with other platform users.
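To make that obligation concrete, below is a minimal sketch of an account-creation gate that combines these assurance methods. Everything here is hypothetical: the method names, the two-year buffer on facial estimation, the confidence threshold, and the choice to let behavioral signals only route users toward a stronger check. Real policies will vary by platform and vendor.

```python
from dataclasses import dataclass
from enum import Enum


class Method(Enum):
    ID_DOCUMENT = "id_document"        # government ID check via a verification vendor
    FACIAL_ESTIMATION = "facial_age"   # age estimated from a selfie, with a confidence score
    BEHAVIORAL = "behavioral"          # age band inferred from on-platform signals


@dataclass
class AssuranceResult:
    method: Method
    estimated_age: float
    confidence: float  # 0.0-1.0, vendor- or model-reported

# Hypothetical policy: facial estimation alone must clear a buffer above 16,
# since estimators carry errors of a few years near the threshold.
MIN_AGE = 16
FACIAL_BUFFER_YEARS = 2.0


def may_create_account(results: list[AssuranceResult]) -> bool:
    """Return True if any signal clears the minimum-age bar under this sketch's policy."""
    for r in results:
        if r.method is Method.ID_DOCUMENT and r.estimated_age >= MIN_AGE:
            return True  # document-backed age is treated as authoritative here
        if (r.method is Method.FACIAL_ESTIMATION
                and r.estimated_age >= MIN_AGE + FACIAL_BUFFER_YEARS
                and r.confidence >= 0.9):
            return True  # estimation passes only with headroom and high confidence
    # In this sketch, behavioral signals never grant access on their own;
    # they can only route a user toward a stronger check.
    return False
```

Note that nothing in the law mandates thresholds like these; as the next paragraph explains, the standard is "reasonable steps," not a fixed accuracy bar.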
Critically, the law is principles-based. In September 2025, the eSafety Commissioner published detailed regulatory guidance emphasizing that there is no one-size-fits-all approach for compliance. Platforms are not expected to verify the ages of every user or meet a certain accuracy threshold. Instead, they must be able to demonstrate that they took reasonable steps towards compliance, depending on their size, technical capacity, and risk profile.
Much of this guidance was shaped by the Age Assurance Technology Trial — the $6.5 million trial led by the Age Check Certification Scheme (ACCS), which tested 60 technologies from 48 different vendors. (We wrote about the trial's findings back in September, and facilitated a panel discussion at the ASEAN ICT Forum last month with ACCS and eSafety that dove into the complexity of this trial in more detail.)
In October 2025, the eSafety Commissioner also released a self-assessment tool for providers to determine whether their services fall under the SMMA rule. As of launch, the ban applies to ten platforms: Instagram, Facebook, Threads, Snapchat, YouTube, TikTok, X, Kick, Reddit, and Twitch.
The implementation: What rollout of the SMMA looked like
When the law came into force on December 10, its rollout was characterized as “mostly smooth” — not bad for a major regulatory experiment. A few key points on the rollout:
The ban was not immediately effective for all underage accounts. eSafety has acknowledged that some companies were behind on implementation, specifically citing TikTok, X, and Reddit, and has signaled that it is not focusing on punitive enforcement at this stage. The Commissioner has requested user numbers from each platform, positioning transparency as a key accountability mechanism: a way to confirm the removal of underage accounts and assess whether those users are effectively prevented from creating new ones.
Despite being out of scope for the initial ban, Bluesky voluntarily implemented age checks, working with the vendor Kids Web Services to deliver age assurance.
Reddit complied with the ban but filed a lawsuit on speech grounds, arguing that it should be exempt because it does not meet the law's definition of social media.
As expected, news reports have already detailed how young users are circumventing the ban, from asking older family members to pass age checks for them to spoofing checks with photos and videos of AI-generated adults. Reported tactics have largely mirrored common strategies from other age-gate rollouts, including the UK Online Safety Act. These include (a rough detection sketch follows the list):
Using VPNs to appear outside Australia, since VPN use remains legal
Staying on platforms that are not on the official SMMA list
Using platforms perceived as more likely to let under-16s through, like Snap, according to one interview with teens
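Platforms' actual circumvention detection is proprietary, so the sketch below is purely illustrative: a crude additive score over the kinds of location-signal mismatches the tactics above would produce. All field names, weights, and thresholds are invented.

```python
from dataclasses import dataclass


@dataclass
class Session:
    ip_country: str          # country from IP geolocation
    sim_country: str | None  # country from the device SIM/carrier, if available
    app_store_country: str   # storefront country tied to the account
    asn_is_hosting: bool     # IP belongs to a datacenter/hosting ASN (common for VPNs)


def circumvention_risk(s: Session) -> int:
    """Crude additive risk score for a possibly relocated session; weights are invented."""
    score = 0
    if s.asn_is_hosting:
        score += 2  # datacenter egress is a classic VPN tell
    if s.sim_country and s.sim_country != s.ip_country:
        score += 1  # SIM says one country, IP says another
    if s.app_store_country == "AU" and s.ip_country != "AU":
        score += 1  # Australian account appearing abroad
    return score


# Because VPN use by adults is legal and common, a high score is better
# treated as a trigger for a step-up age check than as a hard block.
session = Session(ip_country="US", sim_country="AU",
                  app_store_country="AU", asn_is_hosting=True)
if circumvention_risk(session) >= 3:
    print("route to step-up age check")
```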
Finally, a new set of smaller social platforms shot to the top of Apple's App Store rankings in Australia. ByteDance's Lemon8 climbed to become the most-downloaded free app in the country, followed closely by the private, friend-focused photo-sharing service Yope, as well as Coverstar, which markets itself as a "safest alternative" to TikTok for younger users. Sensor Tower data showed these alternatives climbing from well outside the top ten among non-banned apps, while other niche platforms like Bluesky, Yubo, and Wizz also gained traction.
Perspectives from different stakeholders
The tenor of these reported incidents closely mirrors the debate that surrounded the UK Online Safety Act's rollout in July, with platforms and policymakers largely holding their established positions on age verification.
After Snap’s facial age estimation system incorrectly allowed a 15-year-old to pass, a company spokesperson framed the incident as evidence that “there are better solutions to age verification that can be implemented at the primary points of entry, such as the operating system (OS), device, or app store levels.”
On the government side, responses have emphasized persistence over perfection. Addressing reports of circumvention, eSafety Commissioner Julie Inman Grant noted that “these isolated cases of teenage creativity, circumvention… and other ingenious ways that people will push boundaries will continue to fill newspaper pages.” She added that regulators “won’t be deterred — we’re playing the long game.”
The Australian public: the split between adults and youth
Perspectives on Australia's SMMA span a wide spectrum. Across party lines, the Australian adult public largely sees the SMMA as a pragmatic harm-reduction measure, though most recognize it is not a silver bullet. A recent poll run by Monash University found that about 80% of Australian adults supported the law, with key concerns centering on the potential for influence and manipulation of young people during key developmental years. However, skepticism remains: a survey of Australian adults by The Courier Mail found that roughly three out of four respondents believe "kids will always find a way around restrictions like these."
Among young people themselves, the story looks quite different. In a poll led by the Australian Broadcasting Corporation, 70% of youth respondents said the ban was not a good idea, and 72% believed it would not be effective. Featured responses highlighted concerns about how the ban would affect students dealing with neurodivergence, mental health issues, and loneliness, along with questions as to why a ban was prioritized over digital literacy education. Similar views were echoed in a recent teen survey led by the Digital Media Research Centre: young people want social media to be improved, but do not see blanket age restrictions as the answer.
Social media companies: from opposition to compliance
Social media companies have long articulated their objections to blanket bans: concerns over data privacy and security, technical feasibility, free expression and access to information, and potential harm to vulnerable communities. However, as the Australian law has come to fruition, companies have generally shifted from outright opposition to reluctant compliance. (The former is perhaps most aptly captured by X owner Elon Musk's response to the SMMA's passage: "Seems like a backdoor way to control access to the Internet by all Australians.")
In the lead-up to December 10, affected platforms issued statements and user-facing notices signaling their intention to comply with Australia's SMMA, though notably without endorsing the law itself, and often paired with pointed critiques. For example, Meta's blog post affirmed that it shared the eSafety Commissioner's goal of providing safe, age-appropriate experiences for youth, but argued that the SMMA would "restrict teens from these benefits, and will result in inconsistent protections across the many apps they use."
TikTok similarly directed blame for the changes at the Australian government in its user-facing messaging: "We understand that these changes may be upsetting, but they are necessary to ensure that TikTok complies with Australian law." And X (formerly Twitter), the last platform to announce its compliance plan, was even more blunt: "It's not our choice – it's what the Australian law requires."
More broadly, when it comes to age assurance as a concept, many platforms continue to argue that the burden shouldn't be theirs alone. Many, like Meta, Snapchat, and YouTube (by way of Google), have repeatedly suggested that age assurance ought to be enforced at the app-store or device level. Others, such as X and Reddit, have posited that age verification can chill free speech and political communication, which are fundamental to their public, discussion-driven models.
Indeed, Reddit had initially argued that the "legally erroneous" and "arbitrary" law should not apply to its platform, claiming it functions primarily as an information-sharing service rather than a social media platform as defined in the law. As described earlier, after agreeing to comply, Reddit filed a lawsuit against the Australian government, claiming the law violates young users' rights to political communication.
Likewise, two Australian teenagers have sued the Australian government, alleging that the ban would infringe upon minors’ constitutional rights to free communication, information, and association. Their case has been backed by the Digital Freedom Project (DFP), an advocacy group led by an Australian parliamentarian.
Human rights organizations: the law as a double-edged sword (or just a sword)
Human rights organizations, including UNICEF and Amnesty International, have also spoken up about the risks of Australia's SMMA. Both groups have issued statements cautioning that the ban may undermine some children's digital rights, and that it cannot serve as a substitute for meaningful reform of the technologies themselves, whether in platform design or content moderation. These organizations may have been addressing Australia, but the breadth of their influence suggests they were also speaking to the next dominoes lined up in other countries, encouraging more thoughtful implementation. At the more critical end of the spectrum, the Electronic Frontier Foundation has responded to the law by launching an Age Verification Resource Hub to help combat what it argues are "systems of surveillance, censorship, and exclusion."
Implications for the future of age assurance and age-based restrictions
The enforcement of Australia's SMMA signals a pivotal moment in online child safety regulation. If social media for young people has been a global experiment, then this law turns Australia into a live, national test case for age-based restrictions. Many countries and jurisdictions have been quick to follow, readying their own social media age minimums: Denmark, Norway, Malaysia, Brazil, and the EU among them.
However, the widespread rollout of age-based restrictions on social media has also amplified calls for age-appropriate scaffolding rather than outright bans. Teens and international organizations increasingly argue that platforms should be held accountable for removing harmful content and features, rather than barring teens from important forms of technology altogether.
In the coming months, it will be worth watching for the following:
How will the eSafety Commissioner apply and enforce the “reasonable steps” standard over time?
What will constitute evidence of the SMMA’s effectiveness?
Will the pending legal challenges shape the scope of the SMMA, or how future age-assurance and age-restriction legislation is written?
How will pushback against the SMMA and similar laws globally shape the pressure on platforms to invest in age-appropriate design, rather than relying on bans to keep teens safe?
Just as we emphasized in our analysis of the UK Online Safety Act, these regulatory developments confirm that age assurance is here to stay. The next big question for the industry, then, is: What should companies do once they have this age data? As we move forward, companies that proactively invest in scalable, adaptable approaches will be best positioned to answer that question and navigate these regulatory waves.
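One illustrative pattern for handling that age data is minimization: persist only a coarse age band, the method used, and a timestamp, and discard the exact birthdate and any ID or selfie imagery once the check completes. The sketch below is one way that could look; all names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class AgeBand(Enum):
    UNDER_16 = "under_16"
    OVER_16 = "16_plus"
    OVER_18 = "18_plus"


@dataclass(frozen=True)
class AgeAssertion:
    """What gets persisted: a coarse band, the method, and a timestamp.

    Deliberately absent: date of birth, ID images, selfies. Those stay with
    the verification step (or vendor) and are discarded after the check.
    """
    band: AgeBand
    method: str          # e.g. "id_document", "facial_estimation"
    checked_at: datetime


def record_check(raw_age: int, method: str) -> AgeAssertion:
    """Collapse an exact age into a band before anything is stored."""
    if raw_age >= 18:
        band = AgeBand.OVER_18
    elif raw_age >= 16:
        band = AgeBand.OVER_16
    else:
        band = AgeBand.UNDER_16
    return AgeAssertion(band=band, method=method, checked_at=datetime.now(timezone.utc))
```

The design choice here is that downstream systems can answer "is this user 16-plus?" without ever holding data that would make an age-verification database a breach target.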
Resources
Social Media Age Restrictions Hub (Australian eSafety Commissioner)
Regulatory Guidance (Australian eSafety Commissioner)
Platform Self-Assessment Tool (Australian eSafety Commissioner)
Age Assurance Technology Trial Main Report (Age Assurance Technology Trial)
Age Verification Methods Explainer (Age Verification Providers Association)
