Online safety for kids and teens: A Vys biweekly brief
In this edition: yet again we see age assurance news from around the world, with attention on access to pornography, social media, and the purchase of knives. In the US, a law that would enable individual CSAM victims to sue platforms is making progress in Congress. (This may be of particular concern to X, which just lost access to Thorn’s detection services.) Apple is taking a turn in the sextortion spotlight with new reporting on abuse taking place via Messages, while the company announces new safety controls. And researchers have found that, rather than screen time alone, it seems to be adolescents’ compulsive use of tech that correlates with mental health harms, including suicidal behavior.
Regulatory updates
In recent regulator actions, Ofcom has opened nine new investigations under the UK’s Online Safety Act, all but one related to the protection of minors. Seven of these are aimed at smaller file-sharing services that have not responded to statutory information requests and are under particular scrutiny for possible failure to mitigate CSAM availability. Another targets a pornography publisher that does not appear to have implemented effective age verification. Across the Irish Sea, Coimisiún na Meán has issued a statutory information notice to X regarding its implementation of age assurance for accessing “content which may impair physical, mental, or moral development of minors” in compliance with Ireland’s Online Safety Code.
As the European Commission works to finalize its guidance on the protection of minors under the Digital Services Act, the tech giants have been weighing in, with Meta lobbying for device- or app store-level verification (along with porn platform owner Aylo), and Google supporting the EU’s planned white label age verification app. Meanwhile, France is continuing to move towards a ban on social media for under-15s, along with requiring age verification for the online purchase of knives. The country’s new age verification law for pornographic sites has hit a bump, with a court suspending its implementation out of concern that a formal objection process involving both the countries hosting the services and the European Commission, required under the EU’s e-Commerce Directive, was not followed.
Australia’s age assurance trial, in preparation for the December implementation of the country’s social media ban for under-16s, has announced 12 preliminary findings, stating that “age assurance can be done in Australia and can be private, robust and effective.” Though findings were generally positive about the state of the available options, one noted that there was no “single ubiquitous solution that would suit all use cases” nor “solutions that were guaranteed to be effective in all deployments.” Reporting from the Australian Broadcasting Corporation highlights how the trial revealed weaknesses in some prominent technologies. In particular, face-scanning technology regularly identified children below the cut-off as being far older, and its age estimates fell within an 18-month range only 85% of the time. The eSafety Commissioner has not yet set requirements for compliance with the new law.
(Sidenote: We recently launched our Age Assurance Implementation Handbook, a practical toolkit for legal, product, and trust & safety teams building their in-house implementation strategies. Check out last month’s issue of Quire to learn more and purchase your copy.)
Australia’s eSafety Commissioner has also published an article on AI-generated CSAM, reviewing the current state of affairs, including the availability of “nudify” apps and the use of imagery in sextortion. The article suggests safety-by-design steps for AI companies and content platforms to take to mitigate the issue, most of which are aligned with services’ enforceable obligations under the Online Safety Act’s newest industry codes and Basic Online Safety Expectations.
Brazil’s Ministry of Justice and Public Security has raised the minimum recommended age for Instagram from 14 to 16 years, citing the availability of content featuring sex, nudity, violence, and drug use. Meta has hailed its introduction of teen accounts, but platform safety measures do not appear to have been taken into account in the ministry’s review methodology.
In the US Congress, the bipartisan STOP CSAM Act passed out of its Senate committee with unanimous approval. Amongst other measures, including updated NCMEC reporting and platform transparency requirements regarding CSAM, the bill provides for a private right of action, enabling CSAM victims and their families to sue platforms in some cases. This includes the “reckless” hosting of CSAM, prompting a number of civil liberties organizations to speak out against the bill. Their key concern is that this standard will discourage platforms from offering reliable end-to-end encryption out of fear that doing so will expose them to liability under the law.
Industry news
The Wall Street Journal zeroes in on the use of Apple Messages in sextortion cases. After making contact on social media, some extorters move to the end-to-end encrypted messaging system, where the blue bubbles indicating in-ecosystem communication inspire trust and there are fewer safeguards than on some other platforms—including no way to flag abuse other than reporting it as spam.
This report coincides with Apple’s announcement of expanded tools to help parents protect kids and teens online, including the expansion of Communication Safety—which blurs, blocks, or warns about sending and receiving nude images—as a default for all under-18s, and the extension of Communication Limits to enable parental approval for communicating with all new contacts. Other new features include the rollout of the Declared Age Range API for age assurance and more granular age ratings in the App Store.
Thorn has terminated its contract for CSAM detection services with X, citing nonpayment of invoices. Meanwhile, suspected bots are flooding hashtags with hundreds of posts per hour advertising the sale of CSAM, and the platform’s Communities feature is enabling groups to coalesce around openly sharing such imagery.
Research and civil society
A new study, tracking nearly 4,300 youths aged 9 to 10 at entry over four years, looked at young people’s patterns of compulsive or “addictive” technology use (e.g., feeling unable to stop using a device, experiencing distress when not using it, or using it to escape from problems). By age 14, the share of participants on a “high addictive use trajectory” ranged from roughly 25% to 40% across social media, mobile phones, and video games. These adolescents were significantly more likely to report suicidal thoughts or behaviors, as well as other mental health symptoms such as anxiety, depression, or aggression. Higher addictive use trajectories were correlated with roughly double the risk of suicidal behavior in the near future. (Simply spending more time on screens at 10 years old was not associated with worse mental health outcomes.)
A survey of British parents and children, paired with a qualitative study of 15- and 16-year-olds, by Internet Matters sheds light on children’s use of reporting and blocking tools. Key findings include:
only around half of children who say they have encountered harm report it;
reporting is more likely when children perceive direct harm to themselves or people close to them;
rates of reporting vary widely by platform (31% on Roblox vs. 7% on Discord); and
although most children who reported expressed overall satisfaction with the processes and outcomes, a majority also identified obstacles.
The authors made recommendations in the context of the new Ofcom Codes of Practice and noted substantial alignment between the report’s conclusions and the codes’ requirements.
Long reads
Global developers’ insights into Child Rights by Design (LSE Digital Futures for Children Centre / 5Rights Foundation)
Restriction or resilience? Smartphone bans in schools: a qualitative study of the experiences of students (Dublin City University Anti-Bullying Centre)
Effects of persuasive app design and self-regulation on young children's digital disengagement (Human Behavior and Emerging Technologies)