Australia implements world's strictest social media ban for under-16s

Australia bans social media for children under 16 starting December 10, 2025, affecting Facebook, Instagram, TikTok, YouTube, Reddit, and Kick, with platforms facing fines of up to $49.5 million.

Australia has enacted the world's most restrictive social media legislation, prohibiting anyone under 16 from creating or maintaining accounts on major platforms from December 10, 2025. The law passed the Australian Parliament on November 29, 2024, establishing unprecedented restrictions without exemptions for existing users or parental consent.

Nine platforms have been identified as age-restricted social media services. Facebook, Instagram, Snapchat, Threads, TikTok, X (formerly Twitter), YouTube, Reddit, and Kick must take "reasonable steps" to prevent Australians under 16 from holding accounts or face civil penalties up to 150,000 penalty units—currently equivalent to $49.5 million.

Communications Minister Michelle Rowland announced the addition of Reddit and Kick to the restriction list on November 5, 2025, five weeks before enforcement begins. eSafety Commissioner Julie Inman Grant characterized the list as "dynamic," indicating additional platforms could be added based on feature changes or usage patterns. Platforms including Roblox, Discord, Steam, OpenAI's Sora, and Bluesky remain under assessment.

Legislative framework and enforcement timeline

The Online Safety Act 2021 amendments establish the legal foundation for age restrictions, creating obligations for platforms that meet four specific conditions. A service must have online social interaction as its sole or a significant purpose, allow users to link to or interact with other users, allow users to post material, and make content accessible to Australian end-users.

The Online Safety (Age-Restricted Social Media Platforms) Rules 2025, made by the Minister for Communications in July 2025, exclude eight classes of services from restrictions. Excluded categories include platforms with sole or primary purposes of enabling messaging, email, voice or video calling; online gaming; product or service information sharing; professional networking; education support; or health services support.

According to eSafety's assessment published November 5, 2025, Discord, GitHub, Google Classroom, LEGO Play, Messenger, Roblox, Steam and Steam Chat, WhatsApp, and YouTube Kids do not currently meet criteria for age-restricted platforms. These services fall within exclusion categories or lack the characteristics defining restricted platforms.

Prime Minister Anthony Albanese defended the legislation's complexity while acknowledging implementation challenges. "We don't argue that its implementation will be perfect, just like the alcohol ban for children under 18 doesn't mean that someone under 18 never has access—but we know that it's the right thing to do," Albanese stated November 29, 2024, following parliamentary passage.

The legislation specifically prohibits platforms from compelling Australians to provide government-issued identification or to use Australian Government accredited digital ID services for age verification. Platforms may offer these as options but must provide reasonable alternatives; eSafety can seek penalties of up to $49.5 million from platforms that leave users no alternative to government ID.

Technical implementation and age assurance methods

eSafety conducted consultations with online service providers, age assurance vendors, technical experts, international regulators, and civil society representatives between June and August 2025 to inform the development of regulatory guidance. The consultations, comprising 26 multi-stakeholder roundtables and one-on-one meetings and summarized in a published document, explored age assurance technologies, implementation considerations, impacts on users, circumvention risks, and communication strategies.

Online service providers shared examples of technologies currently used or under consideration for detecting, inferring, estimating, or verifying user age at signup and throughout the user lifecycle. Facial age estimation analyzes biometric data using artificial intelligence through user images and liveness checks. Age inference models analyze user behavior, content, and engagement patterns. Real-time AI detection tools flag potential age inconsistencies between stated age and apparent age on live streams and user-generated content.
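
None of the platforms has published its models, but the signal-fusion idea behind age inference can be sketched. The toy Python below is a minimal illustration, not any platform's actual method: every feature name, weight, and threshold is an assumption invented for this example, and a real system would train its weights on labelled data and route borderline cases to human review.

```python
from dataclasses import dataclass

# All feature names and weights below are hypothetical, for illustration only.
@dataclass
class ProfileSignals:
    stated_age: int
    youth_creator_follow_ratio: float   # share of followed accounts aimed at teens, 0-1
    school_hours_activity_ratio: float  # share of activity during school hours, 0-1
    teen_language_score: float          # classifier score for teen-typical language, 0-1

def under_16_likelihood(s: ProfileSignals) -> float:
    """Fuse heterogeneous behavioral signals into one under-16 likelihood score.

    A production system would learn these weights from labelled data; this
    fixed linear blend only illustrates the fusion step.
    """
    score = (0.4 * s.youth_creator_follow_ratio
             + 0.3 * s.school_hours_activity_ratio
             + 0.3 * s.teen_language_score)
    # A stated age that contradicts strong behavioral evidence is itself a signal.
    if s.stated_age >= 16 and score > 0.7:
        score = min(1.0, score + 0.1)
    return score

if __name__ == "__main__":
    p = ProfileSignals(stated_age=17,
                       youth_creator_follow_ratio=0.8,
                       school_hours_activity_ratio=0.6,
                       teen_language_score=0.9)
    print(f"under-16 likelihood: {under_16_likelihood(p):.2f}")  # 0.87
```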

Supporting measures identified by consultation participants include app store age ranges, currently in development, to be shared via application programming interfaces (APIs) as complementary signals confirming user age. Parental vouching was raised both as a platform method and as a potential app store API feature, though some participants questioned its reliability. Human moderation assesses flagged accounts and edge cases. ID verification typically serves as a fallback or appeal mechanism, with documents accepted in overseas jurisdictions including school cards, passports, and driver licenses.
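
How a platform might consume such an app store signal is necessarily speculative, since these APIs are still in development. The sketch below assumes a hypothetical endpoint, token scheme, and JSON response shape, and treats the returned range strictly as a corroborating signal, never as a standalone decision.

```python
import json
from urllib.request import Request, urlopen

# Hypothetical endpoint and response shape: the app store age-range APIs
# described in the consultations are still in development.
AGE_RANGE_API = "https://appstore.example.com/v1/age-range"

def fetch_declared_age_range(user_token: str) -> tuple[int, int] | None:
    """Return the coarse (min_age, max_age) tied to a device account, or None."""
    req = Request(AGE_RANGE_API,
                  headers={"Authorization": f"Bearer {user_token}",
                           "Accept": "application/json"})
    try:
        with urlopen(req, timeout=5) as resp:
            payload = json.load(resp)
            return payload["min_age"], payload["max_age"]
    except (OSError, KeyError):
        return None  # a missing signal is inconclusive, not a failure

def corroborates_16_plus(user_token: str, stated_age: int) -> bool:
    """Use the app store range only to corroborate a stated age of 16 or over."""
    age_range = fetch_declared_age_range(user_token)
    if age_range is None:
        return False  # fall back to the platform's other assurance methods
    min_age, _max_age = age_range
    return stated_age >= 16 and min_age >= 16
```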

Age assurance vendors participating in the consultations highlighted implementation challenges. Narrow age thresholds (for example, plus or minus one year) make restricted ages difficult to detect accurately, especially in mid-teen ranges, where developmental variability is high. Some participants suggested that detecting an 18-year threshold might prove easier because more life milestones signal that age, such as credit cards, financial obligations, driver licenses, and voter registration.
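
The threshold difficulty is easy to make concrete. In the minimal sketch below, the estimator's error margin at mid-teen ages (the ±2 years here is an assumed figure, not any vendor's measured accuracy) defines a buffer around 16 inside which the platform escalates to a stronger check instead of trusting the estimate.

```python
def route_age_decision(estimated_age: float, margin_years: float = 2.0) -> str:
    """Route a facial-age estimate using a buffer around the 16-year threshold.

    margin_years stands in for the estimator's error at mid-teen ages;
    the default of 2.0 is an assumption for illustration.
    """
    if estimated_age >= 16 + margin_years:
        return "allow"      # confidently 16 or older
    if estimated_age < 16 - margin_years:
        return "restrict"   # confidently under 16
    return "escalate"       # too close to call: use ID fallback or vouching

for age in (13.0, 15.5, 16.4, 18.5):
    print(f"{age} -> {route_age_decision(age)}")
# 13.0 -> restrict, 15.5 -> escalate, 16.4 -> escalate, 18.5 -> allow
```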

Visual and behavioral signals used for age inference or estimation can be difficult to interpret and to separate from other noise; an adult posting about children's television programming could be misidentified as under 16. Too much friction during verification can lead users to abandon or circumvent age assurance. Privacy-focused designs and ephemeral content features limit the data available for real-time age detection and model tuning.

Circumvention risks and mitigation strategies

Consultation participants identified multiple circumvention methods likely to be attempted. Using virtual private networks or proxies to hide a user's real location can be mitigated through IP address analysis combined with behavioral patterns and geolocation data, according to age assurance vendors. Sharing age-assured accounts with underage users—known as account "muling"—can be detected through standard security practices triggering checks when accounts are accessed through new devices or locations or when behavior patterns change.
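
The muling checks the vendors describe amount to anomaly-triggered re-verification. Below is a minimal sketch, assuming invented event fields and thresholds, since no vendor's actual heuristics are public.

```python
from dataclasses import dataclass, field

@dataclass
class LoginEvent:
    device_id: str
    country: str
    typing_speed_wpm: float  # crude behavioral fingerprint, illustrative only

@dataclass
class AccountState:
    known_devices: set[str] = field(default_factory=set)
    usual_country: str = ""
    baseline_wpm: float = 0.0

def needs_age_recheck(state: AccountState, event: LoginEvent) -> bool:
    """Flag logins suggesting an age-assured account is being shared ("muled").

    Any single trigger requests a re-check; the 50% behavior-shift threshold
    is an assumption for illustration.
    """
    new_device = event.device_id not in state.known_devices
    new_location = bool(state.usual_country) and event.country != state.usual_country
    behavior_shift = (state.baseline_wpm > 0 and
                      abs(event.typing_speed_wpm - state.baseline_wpm)
                      / state.baseline_wpm > 0.5)
    return new_device or new_location or behavior_shift

# Example: a verified account suddenly logs in from an unrecognized device.
state = AccountState(known_devices={"phone-1"}, usual_country="AU", baseline_wpm=65.0)
print(needs_age_recheck(state, LoginEvent("tablet-9", "AU", 38.0)))  # True
```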

Creating fake or mixed identities by combining real and false information to trick age assurance systems can be addressed through anti-spoofing measures and ongoing age inference models. Clearing browser cache or switching devices to reset age checks and make new attempts remains a persistent challenge. Answering knowledge-based questions using guessed or known information presents verification reliability concerns.

International regulators and government representatives who participated in the consultations shared enforcement approaches, including risk-based supervision prioritizing platforms with higher risk profiles or larger youth user bases. Evidence-based reviews combine platform-provided data with complaints, research, and user experience testing. Statutory information requests and third-party audits provide verification mechanisms. Challenges include information asymmetry between platforms and stakeholders and a lack of standardized metrics for comparing effectiveness across platforms.

Several consultation participants expressed concerns about implementing age assurance systems under compressed timelines. Modifying existing systems or developing proprietary solutions involves multiple stages (design, development, testing, and deployment), each requiring careful planning and execution. Engaging third-party verification providers still requires companies to undertake procurement, system integration, and rigorous testing before rollout. Organizations of all sizes may depend on external vendors for age assurance technologies, and the capacity and scalability of those providers remain largely untested, posing potential bottlenecks.

Impact on marketing and advertising professionals

The age restrictions carry significant implications for digital marketers operating in Australia. Australian digital advertising reached a record $17.2 billion in fiscal year 2025, with social media platforms demonstrating exceptional performance. Social video advertising spend increased 36.7% year-on-year to reach $1.9 billion, representing 38% of total video expenditure.

The under-16 restrictions will reduce addressable audiences on major platforms. Marketers targeting youth demographics must adapt strategies to reach audiences through excluded platforms or alternative channels. Age verification requirements may affect campaign setup, audience targeting capabilities, and measurement accuracy.

Platforms may implement additional restrictions on advertising targeting capabilities to comply with age verification obligations. Enhanced privacy protections for verified adult users could limit data collection and targeting precision. Marketers should anticipate potential impacts on lookalike audience modeling, custom audience uploads, and behavioral targeting based on engagement signals.

Similar regulatory developments globally demonstrate broader industry trends. The UK's Online Safety Act, which became fully operational on July 25, 2025, required platforms to implement robust age verification systems. X implemented age assurance measures using facial age estimation or government ID verification, though widespread technical difficulties affected user access.

The European Union announced detailed technical specifications on July 14, 2025, for an EU-wide age verification system requiring digital identity credentials for adult content access, with full implementation scheduled by 2026. The framework attempts to balance child protection objectives with data protection requirements through a privacy-preserving technical architecture.

France introduced legislation in 2023 blocking social media access for children under 15 without parental consent, though research indicates almost half of users avoided the ban using VPNs. A law in Utah similar to Australia's was overturned by a federal judge who found it unconstitutional. Norway has pledged to follow Australia's approach, and the UK's technology secretary said a similar ban was "on the table" though he later added "not at the moment."

Concerns from civil society and youth advocates

Civil society organizations and academics who participated in eSafety consultations raised concerns about potential unintended consequences. Under-16s may migrate to less safe platforms, experience disconnection from support networks, and lose access to education, employment, and mental health support. LGBTIQ+ youth who use social media to access sexual health information anonymously were cited as a particularly affected population.

Many support organizations rely on social media to reach young people and raised concerns about needing new outreach strategies. Participants recommended that eSafety and relevant stakeholders across government and non-government organizations monitor and take steps to address these issues.

Some participants cautioned against viewing age restrictions as a standalone solution. They called for broader systemic reform including certifying "safe digital spaces" with clear definitions, stronger industry accountability for user safety, industry adoption of Safety by Design principles, and ongoing evaluation and learning through independent reviews.

Youth advocates accused the government of not fully understanding the role social media plays in young people's lives and locking them out of the debate. "We understand we are vulnerable to the risks and negative impacts of social media, but we need to be involved in developing solutions," wrote the eSafety Youth Council, which advises the regulator.

Educational sector implications and preparations

Age restrictions may apply to platforms some schools currently use for educational purposes and to communicate with students and community members. eSafety has informed Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Kick, and Reddit of its view that they are age-restricted platforms and therefore required to prevent Australians under 16 from having accounts.

Learning management systems, which allow educators to share course materials, manage assignments, and facilitate communication, and which let students access classroom resources, submit work, and collaborate with peers, are excluded from the age restrictions. While these services often integrate other tools such as video conferencing, messaging, and content posting, the exclusion applies if their sole or primary purpose is supporting user education.

Some learning management systems allow teachers to embed public video content from other platforms onto the system, such as YouTube videos. If content is publicly available and does not require students to log into another platform, students will still be able to watch this content without holding accounts.

Participants from eSafety's National Online Safety Education Council reported increasing demand for parent- and carer-facing communication about the legislation. They identified challenges associated with student cohorts spanning multiple ages; for example, Year 10 classes may include 14-, 15-, and 16-year-olds, to whom different rules apply. Educational institutions expressed concerns about rethinking and adjusting promotional practices for attracting prospective students, especially institutions that have relied on social media channels.

Privacy protections and data handling requirements

The Social Media Minimum Age legislation builds on existing privacy protections contained in the Privacy Act. Platforms must ensure any personal information collected to verify a user is 16 or older is not used for other purposes without consent, including marketing. The Australian Government's Age Assurance Technology Trial confirmed a variety of methods provide effective age checks while preserving privacy.

Age-restricted platforms are expected to give users under 16 a simple, seamless way to download their account information before deactivation or removal, or to request access to that information within a reasonable period afterward. Information should be provided in easily accessible formats, and platforms should consider formats that allow end-users to transfer information and content to other services or to re-upload it to the same platform if they sign up again after turning 16.

eSafety published regulatory guidance drawing on the Age Assurance Technology Trial as well as stakeholder consultations, including ongoing engagement with social media platforms likely to be restricted. The guidance draws on eSafety's existing knowledge base and includes principles consistent with similar international frameworks. The Office of the Australian Information Commissioner will provide guidance on privacy considerations.

Participants recommended that regulatory reporting requirements be clear, to avoid over-collection of data driven by uncertainty. They urged eSafety to support a data-minimization approach and to provide clear guidance on what data platforms need to retain for compliance purposes.

Platform responses and compliance preparations

TikTok, Snapchat, and Meta told federal parliament in October 2025 that while they disagree with the policy, they will comply with the ban when it takes effect December 10. YouTube maintains its disagreement with inclusion in the ban but has not stated whether it will comply; the company previously indicated it may launch legal action but has not done so.

Elon Musk's X expressed opposition to the ban and has not said whether it will comply with the law; the company has fought several legal disputes with the eSafety Commissioner in recent years. Reddit and Kick, added to the restriction list on November 5, 2025, have not yet publicly commented on their compliance intentions.

Minister Rowland emphasized platforms received adequate notice regardless of when they were formally identified. "These platforms, it is their duty every single day to consider whether they are going to be caught up in the law and whether they themselves, as a social media platform, will be required," Rowland stated November 5, 2025. "If they have not given thought to this up until today, that is nobody's business but theirs. They've had twelve months' notice."

Some consultation participants shared examples of integrating age assurance methods into existing systems, in some cases within hours or weeks, though timelines depend on a platform's testing requirements, contract timeframes, and the readiness and complexity of existing reporting and appeal processes.

Timeline

  • November 29, 2024: Australian Parliament approves social media age restrictions with bipartisan support, setting minimum age at 16 without exemptions for existing users or parental consent.
  • June-August 2025: eSafety Commissioner conducts 26 multi-stakeholder consultation roundtables and one-on-one meetings with online service providers, age assurance vendors, technical experts, international regulators, civil society organizations, and educators.
  • July 2025: Minister for Communications makes Online Safety (Age-Restricted Social Media Platforms) Rules 2025, establishing eight classes of excluded services.
  • September 26, 2025: eSafety publishes regulatory guidance for platforms on implementing age assurance measures while preserving privacy and complying with the Online Safety Act.
  • November 5, 2025: eSafety informs nine platforms they are age-restricted under the legislation—Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Reddit, and Kick. Commissioner characterizes list as "dynamic" with additional platforms potentially added.
  • December 10, 2025: Age restrictions take effect. Platforms must have implemented reasonable steps to prevent under-16s from creating or maintaining accounts or face penalties up to $49.5 million.

Summary

Who: Australia's eSafety Commissioner, operating under legislation passed by the Australian Parliament, regulates social media platforms including Meta (Facebook, Instagram, Threads), Snap (Snapchat), ByteDance (TikTok), X Corp (X/Twitter), Google (YouTube), Reddit, and Kick. The restrictions affect all Australians under 16.

What: Age-restricted social media platforms must implement reasonable steps preventing anyone under 16 from creating or maintaining accounts. This includes finding and deactivating existing underage accounts, preventing new account creation by underage users, implementing age verification or assurance technologies, establishing appeal processes for mistaken restrictions, and preventing circumvention attempts including VPN usage.

When: The restrictions passed Parliament November 29, 2024, and take effect December 10, 2025. Platforms have 35 days from the November 5 announcement of Reddit and Kick additions to comply with requirements. eSafety will monitor compliance immediately upon the December 10 effective date, taking a proportionate and risk-based approach initially focusing on services with the greatest number of end-users where there are higher risks of harm.

Where: The restrictions apply to Australia-resident users under 16 accessing age-restricted platforms. Platforms must determine whether users are ordinary residents of Australia through location signals including IP addresses, GPS or location services, device language and time settings, device identifiers, Australian phone numbers, app store or operating system account settings, and photos, tags, connections, engagement, or activity indicating Australian residence.
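
Treated as code, the residency determination is a multi-signal heuristic. A rough sketch follows, in which the signal names and the two-signal agreement rule are assumptions for illustration rather than eSafety requirements.

```python
# Signal names and the two-signal threshold are illustrative assumptions,
# not rules drawn from eSafety guidance.
AU_SIGNALS = (
    "ip_in_australia",
    "gps_in_australia",
    "device_locale_en_au",
    "australian_phone_number",
    "app_store_region_au",
)

def likely_australian_resident(signals: dict[str, bool], min_signals: int = 2) -> bool:
    """Treat residency as established when enough independent signals agree."""
    hits = sum(1 for name in AU_SIGNALS if signals.get(name, False))
    return hits >= min_signals

print(likely_australian_resident({"ip_in_australia": True,
                                  "australian_phone_number": True}))  # True
```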

Why: The Australian Government justifies the restrictions as protecting young people from harms associated with age-restricted social media platform use. Being logged into accounts increases the likelihood of exposure to pressures and risks that are difficult to manage, including platform design features encouraging excessive screen time; notifications and alerts linked to reduced sleep, reduced attention, and increased stress; and over-exposure to harmful content affecting immediate and long-term health and wellbeing. Delaying access until 16 gives young people time to develop digital literacy, critical reasoning, impulse control, and greater resilience before facing age-restricted platform risks.