Online safety for kids and teens: A Vys biweekly brief
In this edition: App makers have stepped up their DC lobbying to shift the burden of age assurance onto the Apple and Google app stores, while Google Wallet is introducing its own age verification solution based on zero-knowledge proofs. Child influencer welfare is on the agenda in Brazil, the UK, and US states. The makers of AI chatbots are working out how to handle minors, with Instagram blocking access, OpenAI mending its guardrails, and Google expanding access to under-13s. Meanwhile, Common Sense Media concludes that social companion chatbots are far too risky for all minors.
Regulatory updates
The Kids Online Safety Act (KOSA) had accumulated a lot of momentum by the end of last year, but has yet to be reintroduced this year. The Verge explores how this came to be: lobbying from Meta and Google reinforced censorship concerns among critics on both sides of the aisle, including the House Speaker. The article goes on to examine the chances of passage for KOSA (not good) or more targeted online child safety bills (better!) in this Congress amid the big tech truce with President Trump and the attacks on regulatory agencies.
Meanwhile, app makers Meta, Match Group, and Spotify launched a new lobbying group, the Coalition for a Competitive Mobile Experience, which is pushing for Google and Apple to be tasked with age verification responsibilities. Snap CEO Evan Spiegel published an op-ed in The Hill supporting a new bill to this effect, arguing that this approach is simpler, more secure, and more privacy-preserving. Utah’s law requiring app store age verification has just been enacted and will go into effect a year from now. A statement from the Family Online Safety Institute briefly lays out some problems with Utah’s law, including the practical difficulty of multiple users sharing one device, the exclusion of non-app-based platforms, and an overemphasis on binary parental consent.
A group of UK civil society organizations has criticized Ofcom’s protection of children proposals and codes under the Online Safety Act as overly cautious. While acknowledging that some specific “small” improvements have been made since the consultation drafts, the Online Safety Act Network accuses the regulator of being more receptive to business concerns than to those of civil society. In particular, it points to a gap between the potential harms identified and the mitigation measures required to earn safe harbor, and to an overemphasis on the financial burden on companies without consideration of the societal costs of the online harms.
Harms to child influencers are also in the spotlight: in the UK, a senior MP from the governing Labour party has noted that young people compensated by brands face ongoing harms, and that neither the performance laws that apply to child actors nor the new Online Safety Act rules protect them. While the British government’s focus appears to be on the brands, in Brazil the Labor Prosecutor’s Office is continuing to scrutinize TikTok, which was fined last year for allowing children to monetize accounts. Reporting by Rest of World describes a widespread problem, with children compensated not only directly by TikTok and Kwai for content (including videos of potentially hazardous street-hawking) but also through the digital marketing platforms Cakto and Kirvano, where get-rich-quick schemes are a popular product promoted by some child influencers. FOSI recently discussed the issue through a US lens: Utah has passed a bill protecting child influencers, and a New York State Assemblymember has introduced a similar bill.
The Italian regulator AGCOM has approved age verification measures for online pornographic content. Although the regulator describes its approach as “technology-neutral,” it has stipulated a mechanism whereby age verification providers cannot see which service the age proof is being issued for, and the proof shared with the website/platform contains no identifying user information. A mobile app will be used to generate and certify proof of age for accessing restricted content.
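To make the two properties concrete, here is a toy Python sketch of one way such a “double blind” flow can be built, using RSA blind signatures. This is an illustration under stated assumptions, not AGCOM’s certified mechanism: the provider signs a blinded token, so it cannot link issuance to the site where the proof is later presented, and the site learns only that a valid “over 18” token was issued. Keys are illustrative only; a real deployment would use 2048-bit+ keys and a padded scheme such as RFC 9474 blind RSA.

```python
import hashlib, math, secrets

# Toy "double blind" age proof via RSA blind signatures (illustrative only).
p, q = 1000003, 1000033                   # toy primes, NOT secure
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))         # provider's signing key

# 1. The user's app creates a random "over 18" token and blinds it, so the
#    provider cannot recognise it when it is later shown to a website.
token = secrets.token_bytes(16)
m = int.from_bytes(hashlib.sha256(token).digest(), "big") % n
while True:
    r = secrets.randbelow(n - 2) + 2
    if math.gcd(r, n) == 1:               # r must be invertible mod n
        break
blinded = (m * pow(r, e, n)) % n

# 2. The provider verifies the user's age out-of-band, then signs the
#    blinded value; it never sees the token or the destination service.
blind_sig = pow(blinded, d, n)

# 3. The app unblinds the signature and presents (token, sig) to the site.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. The site checks the provider's signature: it learns "over 18" and no
#    identifying information, and the provider cannot link the check back.
m_check = int.from_bytes(hashlib.sha256(token).digest(), "big") % n
assert pow(sig, e, n) == m_check
print("age proof accepted: no identity disclosed, issuance unlinkable")
```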
A complaint against Meta filed at the FTC alleges widespread Horizon Worlds COPPA violations. Fairplay submitted the complaint, which cites evidence from researchers who “heard the voices of children” on the platform before accounts for under-13s were introduced, as well as whistleblower testimony that executives knew underage users with non-child accounts made up an overwhelming share of Horizon Worlds users, and that documenting discussions of the issue was deliberately avoided.
Florida’s attorney general has subpoenaed Roblox over child safety concerns, demanding information about the gaming platform’s age verification systems, chat moderation practices, parental controls, marketing strategies, abuse reports involving Florida users, and all records of contact with NCMEC. This follows two California cases in which children met an abductor and a sexual abuser via some combination of Roblox and Discord.
New Zealand’s 2024 Digital Child Exploitation Transparency Report reveals that the country’s Internet filter, which covers URLs of known CSAM (and some other egregious sexual abuse material) and is administered by the Department of Internal Affairs, blocked over one million access attempts. Other report highlights include 14 children safeguarded from further risk of harm and 13 offenders successfully prosecuted for related offences.
Industry news
Minors made up 27% of the Instagram follow recommendations to users identified as “groomers,” according to a 2019 internal report cited by the FTC in arguing that Meta’s acquisition of Instagram harmed consumers because Meta under-resourced the product. In total, 7% of Instagram follow recommendations made to adults were of minors, and 54% of reports by minors of inappropriate comments concerned comments made by adults.
Instagram is blocking minors from accessing its chatbot platform AI Studio, 404 Media discovered while following up on the recent revelations about the willingness of Meta’s chatbots to engage minors in sexual roleplay. At the same time, TechCrunch’s testing showed that OpenAI also allowed minors to generate erotic conversations; the AI lab described this as a “bug” and said it was rolling out a fix. Both OpenAI and Meta have moved to loosen guardrails in recent months. Google, typically a more cautious competitor, has begun rolling out its Gemini chatbot to children under 13. The company emailed parents who supervise children’s accounts to explain their access options and to recommend that they discuss responsible use of generative AI with their children.
Discord is verifying UK and Australian users’ ages the first time they are exposed to explicit content or when they try to lower their filter settings to see such material. Users can choose between an on-device facial scan and uploading an ID. Although the company describes this as an “experiment,” both of these markets are phasing in regulations that will require age assurance.
Google Wallet introduces zero-knowledge proofs for age verification. The highly privacy-preserving technology will be integrated into the app and available wherever government IDs can be added to Google Wallet. Bumble is mentioned as an early service that will accept this form of age verification, and Google is open-sourcing the technology so that other identity apps can adopt the same approach.
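Conceptually, the relying party receives proof of a single predicate and nothing else. Below is a minimal Python sketch of that information flow; all names are hypothetical, and the opaque blob stands in for a real zero-knowledge proof, which is where Google’s actual cryptography would sit:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class AgeProof:
    statement: str   # the only fact the verifier learns, e.g. "age >= 18"
    blob: bytes      # stand-in for an opaque zero-knowledge proof

class WalletCredential:
    """Signed ID attributes held on-device; the birth date never leaves."""
    def __init__(self, date_of_birth: date):
        self._dob = date_of_birth  # private witness

    def prove_age_at_least(self, years: int, today: date) -> AgeProof:
        # Approximate age check; a real prover would instead run a ZK
        # proving algorithm over the issuer-signed attribute.
        age = (today - self._dob).days / 365.25
        if age < years:
            raise ValueError("holder cannot prove a false statement")
        return AgeProof(f"age >= {years}", b"<opaque-proof>")

def relying_party_accepts(proof: AgeProof) -> bool:
    # The service verifies the proof offline: it sees no name, no birth
    # date, and no document number -- only the proven statement.
    return proof.statement == "age >= 18" and proof.blob == b"<opaque-proof>"

cred = WalletCredential(date(2001, 5, 14))            # hypothetical user
proof = cred.prove_age_at_least(18, date(2025, 11, 1))
print(relying_party_accepts(proof))                   # True; DOB undisclosed
```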
The FBI has opened 250 investigations tied to the 764 network. Despite the 764 founder serving a jail sentence since 2023, every FBI field office has open investigations into the loose association, which is organised on Discord and elsewhere and coerces typically teenage victims into acts of sexual abuse, self-harm, bomb threats, and violence.
Offenders and experts describe how legal online pornography can lead to CSAM viewing in this article from The Guardian, set against the context of the UK’s incoming age verification requirements. Factors under scrutiny include the online accessibility not only of CSAM but also of other extreme (often legal) material, and of pornography in general. Some blame recommendation algorithms for pushing ever more extreme material on those with a pornography addiction.
Dozens of YouTube channels directed at children are showing AI-generated cartoon gore and fetish content. An investigation by WIRED identified concerning videos and channels, often featuring cats or Minions, that appeared either explicitly or implicitly designed to appeal to children. Changes made since the 2017 Elsagate scandal mean that these videos “typically” do not appear on YouTube Kids.
Research and civil society
France launched a national online survey to gauge the psychological impact of TikTok on young people. The consultation, on “minors’ use and perceptions of social networks, and TikTok in particular,” is part of a parliamentary inquiry probing the platform and its impacts on young users. A group of French families has sued TikTok over its alleged impacts, including the deaths by suicide of two young users.
Balancing opportunities and risk in social media age-gating policymaking. This UNICEF policy note recommends that stakeholders conduct a child rights impact assessment for proposed policies and gives ten considerations for age-based limits on social media. These include: listening to children, setting clear objectives, making the age restriction one element of an effort at systemic and comprehensive change, addressing implementation challenges, and evaluating the impact of enacted policy.
Long reads
AI Risk Assessment: Social AI Companions (Common Sense Media)
Children's online “user ages” (Ofcom)
“One day this could happen to me”: Children, nudification tools and sexually explicit deepfakes (Children’s Commissioner for England)
Minors’ health and social media: an interdisciplinary scientific perspective (EC / European Centre for Algorithmic Transparency)
Social media use in adolescents with and without mental health conditions (Nature Human Behaviour)
Future of the DSA: safeguarding minors in the digital age (Centre on Regulation in Europe (CERRE))
Developing children’s algorithmic literacies through curatorship as media literacy (Learning, Media and Technology)