We are at GDC this week chatting with our gaming partners about youth safety, so this will be a shorter newsletter than usual. If you are at the conference and want to connect, please reach out!
February was a busy month across the board and we contributed to a number of interesting conversations:
We spoke to Platformer about Apple’s new age assurance API, which will offer parents a way to set their child’s age when setting up their Apple devices and share that information through an API with developers, who can then determine whether their app is age-appropriate.
We spoke to Capital Brief about the complexities surrounding Australia’s age assurance trial ahead of the country’s social media ban for under-16s that will take effect by the end of the year. (Subscriber-only)
We also joined the Say Something podcast in India with Amrita Tripathi to talk about social media bans, why they are not the sole solution to child safety online, and what age-appropriate design looks like in the Indian context.
We supported 5Rights Foundation with their new report, Advancing Trust & Safety, sharing our perspectives on what trust & safety professionals and the broader ecosystem need to be truly empowered to support safer environments for users.
We are also going on the road for the next few weeks:
We are excited to support the second annual Trust & Safety Forum in Lille, France, where we will host a discussion on social media bans and designing for youth mental wellbeing.
We will also be speaking at the annual Age Assurance Standards Summit, sharing some exciting work we have been doing on building age-appropriate systems in products.
Finally, we heard feedback from clients and readers that there was too much to keep up with in the youth safety space, so we have begun to supplement our monthly Quire issues with a biweekly LinkedIn digest that captures key new regulatory moves, industry developments, and civil society / research activity.
Subscribe to that digest for more regular updates. We are still evaluating whether to send these as Substack digests similar to Quire issues, so do reach out with your feedback.
Implementing an age strategy: Some frustrations and hopes
As we predicted in January, platforms are focusing much more on age-tiered experiences this year. The solution provider space has been rapidly evolving to meet those needs, with vendors offering everything from age verification services to parental consent platforms to consolidated parental controls. We have been working with companies over the last year to help them understand the options available in the third-party ecosystem, the ongoing regulatory requirements they need to prioritize, and whether these third-party solutions can effectively integrate into their platform builds and user experiences.
A recurring theme in our client conversations is the perception of vested interests all around: regulators have goals they need to meet, solution providers have end-of-quarter deals to close, and growth and revenue pressures are mounting at platforms.
We’ve been thinking about how to address this problem for the last six months and believe we are close to a solution, which we’ll share soon. In the meantime, some key themes that have emerged:
Legal teams - whether in-house or outside counsel - are increasingly constrained in the advice they can give clients. Regulations are changing too quickly for these teams to translate them into specific product or policy requirements.
Solution providers are regarded with suspicion because they have a clear product to sell platforms and usually position it as the definitive way to solve the whole problem of age-appropriate design.
There are too many fragmented elements involved in building an age-appropriate ecosystem, and solution providers really only address one, maybe two, of these pieces. Some of the greatest hits include:
Age verification
Verifiable parental consent
Transparency and consent
Discoverability and defaults
System mechanics and design
We are excited to share the first stage of addressing this problem soon, but if this resonates with your teams, we would love to chat more.
Currently reading
Cruz-Klobuchar bill to protect teenagers from deepfake ‘revenge porn’ unanimously passes the Senate. The TAKE IT DOWN Act would criminalize the sharing of non-consensual intimate imagery (NCII), including AI-generated NCII, and would require online services to establish a procedure to remove reported NCII within 48 hours. It also prohibits sharing intimate imagery of minors that is not already illegal under current laws.
Apple introduces new child safety initiatives, including an age-checking system for apps. Apple has announced a suite of updates to its child accounts. As well as making the setup process simpler, Apple is introducing more fine-grained age ratings for App Store items and an API that gives apps access to the child’s age range, based on the information entered by the parent during account setup. Platformer’s coverage of the announcement featured expert perspectives from Vys.
Online risk exposure rose in 2024, but so did Gen Z requests for help. A survey of 13-to-24-year-olds in six countries, commissioned by Snap Inc, shows that despite troubling trends in sextortion and associated risks, increased engagement between teens and parents about potential harms led to a slight increase in the Digital Well-Being Index, now in the third year of Snap’s calculation of the metric.
The children's manifesto for the future of AI. At the Children’s AI Summit, sponsored by the Alan Turing Institute and Queen Mary University of London, 150 British children added their voices to the policy discussions regarding AI. The resulting manifesto was presented at the AI Action Summit in Paris.