White House Unveils National AI Strategy, EU Deadlock Halts CSAM Detection Framework, & Pinterest CEO Encourages Social Media Ban
Digital Childhood Briefing, by Vys - March 24, 2026
In this issue: The White House released its National Policy Framework on Artificial Intelligence, with a dedicated section on protecting children and empowering parents. EU lawmakers have failed to agree on an extension of the ePrivacy derogation, a measure governing how online platforms tackle child sexual abuse material (CSAM). Brazil’s Digital ECA has taken effect, prompting major platform responses: Rockstar Games has halted direct sales of its titles in Brazil, while Reddit has temporarily suspended under-16 users. And Pinterest CEO Bill Ready published a TIME op-ed encouraging governments to ban social media for under-16 users.
Regulatory updates
The White House released its National Policy Framework on Artificial Intelligence, with a dedicated section on protecting children and empowering parents. The framework emphasizes privacy-protective age assurance, aligning with our prediction that age assurance is becoming infrastructural (see here for details on what this means in practice). Under the framework, AI platforms and services likely to be accessed by minors should also implement features that reduce minors’ risks of sexual exploitation and self-harm. (For more guidance, Vys has developed an age-appropriate AI framework to help companies evaluate responsible design.) Notably, the framework does not preempt state laws on minor protection, meaning that state legislation is likely to accelerate. We recommend that product and policy teams closely track this evolving landscape, and we can help!
(Exploring features or policies that support kids and teens? Reach out to schedule a consultation.)
EU lawmakers have failed to agree on an extension of the ePrivacy derogation, a measure governing how online platforms tackle child sexual abuse material (CSAM). The derogation allows companies to voluntarily detect and remove CSAM; its lapse creates a legal vacuum that pits privacy activists against advocates of stronger online safety protections. In response, a statement co-signed by Google, LinkedIn, Snapchat, Meta, Microsoft, and TikTok urged lawmakers to “swiftly agree on a way forward” to ensure legal clarity for companies.
Brazil’s Digital ECA (Brazil’s Child and Adolescent Statute), a comprehensive online safety regulation, took effect on March 17. With penalties of up to 10% of companies’ global revenue, the legislation requires technology companies to immediately take down content related to child exploitation, prohibits offering gambling to minors (including loot boxes in electronic games), and requires under-16 accounts to be linked to parental supervision tools. In compliance with the legislation, Rockstar Games has already announced that its digital titles will “no longer be purchasable from the Rockstar Games Store or Games Launcher” for Brazilian users (however, titles will remain available on other storefronts, like Xbox or Steam). Reddit has also temporarily suspended under-16 users in Brazil as the company develops integrations for the parental supervision rules. For more details on changes other companies have made, see our previous issue.
(We have supported clients in getting ECA-ready - if this is an area of interest for your team, please reach out to hello@vyanams.com)
Ofcom, the UK’s online safety regulator, is ramping up its scrutiny of major tech companies. The regulator is demanding that social media firms enforce their minimum-age rules with robust age checks; Facebook, Instagram, Roblox, Snapchat, TikTok, and YouTube must publicly report their plans to the regulator by April 30. Ofcom’s demands include effective minimum-age policies, stricter controls to ward off grooming, safer algorithms for children, and a ban on product testing on children. Ofcom also fined imageboard website 4chan £450,000 for failing to implement age checks that prevent children from accessing pornography, citing data from the Children’s Commissioner finding that 59% of British children had accidentally stumbled across pornography. 4chan has refused to pay previous fines; the company’s lawyer responded to the notice with an AI-generated image of a hamster in a Godzilla costume and the claim that “[the company] is breaking no law, and its conduct is expressly protected by the First Amendment.”
A document from the UK Online CSEA Covert Intelligence Team (OCCIT) found that pedophiles are strategically exploiting TikTok’s gift system to reward children for livestreaming pornographic content. Children were observed to be selling self-generated child pornography and self-harm content in exchange for virtual currency to fund their gaming activities on Fortnite and Roblox. The document was provided to Baroness Kidron amid a House of Lords session evaluating a social media ban for under-16s.
The UK government is safety-testing AI toys amid warnings from researchers over their impact on children. AI-powered toys like Bondu and Miko were found to be exposing children’s chat transcripts through unsecured web portals, prompting heightened regulatory scrutiny. Officials at the Department for Business and Trade are putting the toys through real-life scenarios to evaluate their responses. If a toy is deemed unsafe, the government could intervene under the Product Safety and Metrology Act, which targets unsafe products sold in the UK.
Downloads of virtual private networks (VPNs) in Australia are surging, with 3 of the 15 most downloaded smartphone apps on the App Store now VPNs. A report by parental-control software maker Qustodio also found that one-fifth of Australian teenagers under 16 were still using social media two months after the ban’s implementation, raising concerns about the effectiveness of platforms’ age assurance methods.
A lawsuit filed in California federal court targets xAI for producing abusive sexual images of minors, with plaintiffs claiming that xAI failed to take basic precautions to prevent its image models from producing pornography. Australia’s eSafety Commissioner also warned that CSAM was “particularly systemic” on X, with the content more accessible than on “other mainstream service[s].”
Florida’s Attorney General, James Uthmeier, is launching a civil investigation into Discord, which he described as a “safe haven” for predators communicating with children. According to Uthmeier, “we’ve brought investigations into Snapchat, Roblox, and others, and we’ve learned [that] all roads lead to Discord.” The investigation will evaluate how the company addresses risks to children and seek “evidence of wrongdoing” that could justify civil action.
New Jersey Governor Mikie Sherrill announced a legislative package of child safety bills, titled Kids Code, on March 18. The package establishes safety standards for children online, including mandating warning labels on social media platforms for content that could harm children’s mental health and banning cell phone use during class in public schools.
Chatbot regulation continues to heat up. Pennsylvania’s Senate approved the Safeguarding Adolescents from Exploitative Chatbots and Harmful AI Technology Act, or SAFECHAT Act (SB 1090), requiring chatbots to disclose their nonhuman status and implement safeguards, like crisis hotlines, to prevent chatbots from producing suicide and self-harm content. The bill now advances to the House. A chatbot bill in Washington (HB 2225) is heading to Governor Bob Ferguson’s desk following House and Senate approval. The bill similarly requires chatbots to issue clear disclosures when users request health advice, create protocols for detecting suicidal ideation, and provide referral information for crisis services.
Governments in Southeast Asia are proposing stricter regulations to ensure children’s safety. In the Philippines, Secretary Henry Aguda of the Department of Information and Communications Technology (DICT) announced that the department is placing Roblox on notice over reports of grooming on the platform. In a Facebook post, Secretary Aguda threatened the platform with restrictions or suspension if it fails to tighten its child safety systems, though it remains unclear whether the threats will be followed by unilateral government action. In the lead-up to Indonesia’s social media ban for under-16s, which takes effect on March 28, the Elementary and Secondary Education Ministry is introducing guidelines on the use of electronic devices at schools, recommending that teachers limit screen time and designate screen breaks and dedicated areas for gadget use.
Jay Edelson, who represents plaintiffs in high-profile cases of AI chatbot harms, anticipates that future cases of AI harms will involve “mass casualty events,” citing examples of perpetrators confiding in AI to plan attacks. While previous cases have involved self-harm or suicide, Edelson’s firm is investigating several mass-casualty cases to establish a link to AI systems that reinforce vulnerable users’ delusional beliefs.
Industry News
Pinterest CEO Bill Ready published a TIME op-ed encouraging governments to ban social media for under-16 users. Rejecting statements from tech CEOs who condemn such bans (like Snapchat CEO Evan Spiegel), Ready argued that defending the status quo risks endangering children. As he puts it, “imperfect protection is better than none,” and governments should impose clear standards that hold social media companies accountable for delivering age-appropriate experiences.
Following the release of the BBC documentary Inside the Rage Machine, whistleblowers alleged that senior management at Meta and TikTok allowed harmful content into people’s feeds to maximize engagement metrics. A Meta engineer claimed that senior management instructed him to allow “borderline” harmful content (like conspiracy theories) in order to compete with TikTok, while a TikTok employee claimed they were instructed to prioritize cases involving politicians, to avoid threats of regulation, over reports of harmful content featuring children.
Meta announced product updates amid lawsuits in California and New Mexico, including expanding AI content moderation to reduce reliance on third-party reviewers. While human review remains, AI is intended to handle repetitive and evolving harms, with early tests showing it detects twice as much adult sexual solicitation content and cuts errors by over 60%. The company is also removing end-to-end (E2E) encryption from Instagram DMs due to limited usage, following TikTok’s similar decision for private messaging. However, the feature will be retained on WhatsApp and Facebook Messenger. Additionally, WhatsApp is introducing parental controls for under-13 users, allowing parents to manage contacts and review unknown message requests. Rollout will be phased, with no global timeline announced.
Google Play will start screening users under age 18 by the end of March to prevent downloads of inappropriate apps, like dating services or platforms with sexual content. YouTube also plans to send under-18 users screen break reminders and restrict them from repeatedly viewing content idealizing body weights or promoting social aggression. The restrictions come alongside Singapore’s roll-out of age assurance measures, required by the country’s Infocomm Media Development Authority (IMDA).
Research & civil society
Researchers from Cambridge University found that AI toys “misread emotions and responded inappropriately” to children. Evaluating pre-schoolers’ interactions with the voice-activated AI toy Gabbo, the researchers found that children struggled to converse with it, particularly when the toy talked over them or responded awkwardly to declarations of affection. The researchers raised concerns that “children may be left without comfort from the toy and without adult support,” calling for tighter regulation of AI-powered toys for toddlers.
Child safety group Fairplay, which researches the impact of screen time on early childhood, is blasting YouTube for its partnership with Animaj, an entertainment company that produces AI-generated content for children. Describing the content as “AI slop,” a Fairplay representative warned that its mesmerizing visuals and music “puts developing children at risk of harm” by increasing children’s screen time and displacing offline activities.
A study in the Oxford Journal of Public Health found that social media use or gaming frequency was not predictive of internalizing symptoms, like anxiety or depression, among adolescents. The study’s findings run counter to widespread assumptions that technology use is detrimental to children’s mental health, highlighting the need for more nuanced perspectives that consider individual differences in social media use.