Starting today, Australia becomes the first country to enforce a minimum age for social media use, requiring platforms such as Instagram, YouTube and Snapchat to block more than a million accounts held by users under the age of 16. The legislation has drawn criticism from tech companies but support from many parents in the country, and it is likely to set a template for a broader global push to tighten regulation of young users’ online safety.
The ‘Online Safety Amendment (Social Media Minimum Age) Act’ states that age-restricted platforms will be expected to take “reasonable” steps to find existing accounts held by users under 16, deactivate or remove those accounts, and prevent under-16s from opening new ones, including by blocking workarounds that could be used to bypass the restrictions. Platforms are also required to have processes for correcting mistakes, so that no one’s account is removed unfairly if they are wrongly included in, or missed by, the restrictions.
The regulation has left Big Tech scrambling. The affected companies have publicly opposed the law while maintaining that they will comply with it, and local reports suggest Meta has already started deactivating accounts of users under 16. The law does not penalise young Australians who try to access social media after its enforcement, but platforms that fail to block them risk fines of up to $33 million.
According to the Australian government, the curbs aim to protect young people from “pressures and risks” they may be exposed to while logged in to social media accounts. These stem from design features that encourage them to spend more time on screens while serving up content that can harm their health and wellbeing. An earlier survey by a government regulator found that more than half of young Australians had faced cyberbullying on social media platforms.
To be sure, dating websites, gaming platforms and AI chatbots have been excluded from the law, even though AI chatbots have recently made headlines for allowing children to have “sensual” conversations. Apart from tech companies, the Australian Human Rights Commission has also said that a blanket ban on social media for under-16s may not be the “right response,” as it could curtail their right to free speech.
Which platforms are covered under the law?
From December 10, Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, Twitch, X and YouTube will be required to take “reasonable steps” to prevent Australians under 16 from having accounts on their platforms. The Australian government may revisit this list as the situation evolves, including if young users migrate to platforms that are not currently covered.
Australia had initially granted YouTube an exemption from the ban, citing its educational value, but reversed this in July 2025 after a key regulator found it was the platform most cited by children for exposure to harmful content.
Age restrictions will apply to social media platforms that meet three specific conditions: the sole purpose, or a significant purpose, of the service is to enable online social interaction between two or more end-users; the service allows end-users to link to, or interact with, some or all of the other end-users; and the service allows end-users to post material on the service.
Australia’s rationale behind the curbs
According to the government, being logged into a social media account increases the likelihood of under-16s being exposed to pressures and risks that can be hard to deal with, including cyberbullying, stalking, grooming, and harmful and hateful content. These risks may stem from platform design features that encourage children to spend more time on screen while serving up content that can harm their health and wellbeing.
According to a survey conducted between December 2024 and February 2025 by eSafety, Australia’s online safety regulator, almost 3 in 4 children (74 per cent) had seen or heard content associated with harm online. More than 1 in 2 (53 per cent) had experienced cyberbullying, 3 in 5 (60 per cent) had seen or heard online hate and more than 1 in 4 (27 per cent) had personally experienced it, while 1 in 4 (25 per cent) had experienced non-consensual tracking, monitoring or harassment.
The survey also found that 38 per cent had someone say hurtful things to them online, 17 per cent had their private messages, information or secrets shared, 16 per cent were sent or tagged in offensive or upsetting photos or videos, and 13 per cent were told online to hurt or kill themselves, or that they should die.
How have tech companies reacted?
While the companies are complying with the law, they pushed back against it during the consultation phase.
YouTube said that since the law requires kids to use the platform without an account, “it removes the very parental controls and safety filters built to protect them — it will not make kids safer on our platform”. Meta called the law “inefficient,” saying it will “fail to achieve its stated goals of making young people safer online and supporting those who experience harm from their use of technology”.
Snap said disconnecting teens from their friends and family doesn’t make them safer, but may push them to less safe, less private messaging apps. X said it was concerned about the potential impact the law may have on the human rights of children and young people, including their rights to freedom of expression and access to information.