Meta has removed more than 6,35,000 accounts across Instagram and Facebook for violating child safety policies, in what it calls a significant escalation of its efforts to protect teens from online predators. The announcement, made via Meta’s official newsroom on July 23, details a dual strategy: mass account takedowns and new in-app safety protections, particularly for teenagers.
According to the company, 1,35,000 Instagram accounts have been removed for violating its child safety policies, including by posting sexualized comments on, or soliciting imagery from, accounts that appeared to belong to children under 13. Meta has also taken down over 5,00,000 additional Facebook and Instagram accounts linked to the original offenders, some of which were backup or duplicate profiles created to evade enforcement.
The enforcement update arrives alongside new features for Instagram’s Teen Accounts, a restricted account type introduced in 2024 that applies stricter default settings to users aged 13 to 17. These accounts now carry clearer safety signals in direct messages, such as alerts that appear when an unknown account was created recently or seems to be based in a different country. Meta has also introduced a combined block-and-report button for teens, streamlining both actions into a single tap.
Another major change affects the way sensitive content is handled in teen DMs. A nudity protection setting powered by AI and enabled by default now blurs suspected explicit images before teens open them. Meta reports that 99% of teen users have left the setting switched on. In addition, adult-run accounts that feature children, such as parenting or coaching profiles, are now subject to stricter visibility rules: they will not be recommended to users flagged for suspicious behaviour, and will have tighter restrictions on messaging and comments.
Meta says the changes are already showing impact. In June alone, teen users blocked over one million accounts and reported another million, a jump the company attributes to stronger safety cues and more accessible reporting tools. Meta is also using AI to detect adults who may be misrepresenting their age, automatically moving underage users into the protected Teen Account environment when such cases are detected.
The update follows increased pressure from lawmakers, regulators, and safety advocacy groups around the world. Meta and other platforms have been criticised for failing to curb online exploitation, especially after investigations revealed that algorithmic recommendations were leading predators to content involving minors. With legislation like the Kids Online Safety Act gaining traction in the US, Meta’s latest move appears to be both a proactive safety measure and a response to looming regulatory risks.