Australia has exempted YouTube from its new social media restrictions for minors, a decision that has sparked criticism from mental health and extremism experts who warn that the platform exposes children to addictive and harmful content.
Under landmark legislation passed in November, the Australian government will bar users under 16 from TikTok, Snapchat, Instagram, Facebook, and X by the end of 2025. Platforms will be required to take reasonable steps to verify users' ages or face fines of up to A$49.5 million.
However, YouTube, owned by Alphabet, will remain accessible to users of all ages; the government describes it as an educational tool rather than a core social media platform.
The original plan included YouTube in the restrictions, but after consultations with company executives and children’s content creators, the government granted an exemption. A spokesperson for Communications Minister Michelle Rowland said that while YouTube is widely used for entertainment, it also provides educational and informational content relied upon by children, parents, and schools.
Despite this justification, six extremism and mental health researchers told Reuters that the exemption contradicts the main goal of the ban, which is to protect young users from harmful content. Studies show that YouTube is the most popular social media platform among Australian teenagers, with 90% of 12- to 17-year-olds using it.
Critics argue that YouTube hosts the same kinds of dangerous content as the banned platforms, including extremist and violent material and highly addictive videos targeting young audiences. Lise Waldek, a lecturer at Macquarie University’s Department of Security Studies and Criminology, said YouTube is a major platform for spreading extremist content, violent videos, and pornography. Helen Young, a researcher on radicalization, noted that YouTube’s algorithm often directs young male users toward far-right, misogynistic, and racist material.
When asked about these concerns, a YouTube spokesperson defended the platform’s content moderation, stating that it promotes “quality content that encourages respect” while limiting recommendations of videos that could be problematic when viewed repeatedly. YouTube also said its moderation policies have become more aggressive, with broader definitions of harmful content detected by its automated systems.
To test how YouTube’s algorithm steers viewers, Reuters created three accounts under fictitious names of minors and searched topics including sex, COVID-19, and European history. Within 20 clicks, two of the accounts were led to misogynistic content and conspiracy theories, while the third surfaced racist material after 12 hours of intermittent scrolling. Direct searches for misogynist and racist commentators reached harmful content even faster.
After reviewing Reuters’ findings, YouTube removed an interview with an Australian neo-Nazi leader for violating its hate speech rules and took down an account promoting misogynistic content, though four flagged videos remain online. The platform reiterated that it has strict policies prohibiting hate speech, harassment, and graphic content, but did not comment on the videos still available.
As YouTube remains accessible to Australian minors, concerns persist over whether the exemption aligns with the government’s efforts to protect young users from online harm.