Everything we did at VYS in 2024
Plus some teasers for what we'll be up to in 2025
2024 marks the first full year of Vyanams Strategies (VYS) supporting companies, civil society, and government in building more thoughtful digital spaces for young people. So for the last issue of this year, we’re taking a trip down memory lane. At the end of this issue, you’ll find a number of interesting news articles you may want to bookmark for your long trips back home to see family, and we’ll also share a quick preview of what’s coming up for VYS in 2025.
2024 Lookback
AI red teaming for child safety
We conducted rigorous red-teaming of large language and multimodal AI models to probe their vulnerability to child safety risks. Using adversarial models, we simulated interactions by both motivated children and malicious adults, assessing resilience against unsafe prompts and harmful escalations. This human feedback helped uncover gaps in content safeguards and identify challenges with anthropomorphic design that led to inappropriate outputs. Our actionable recommendations informed model refinements towards safer, more developmentally appropriate behaviour.
Comprehensive age assurance guidance
Age verification became a critical regulatory focus in 2024, and VYS guided several thoughtful companies through the entire age assurance journey. Our work included helping them:
Understand the landscape: We ran tailored workshops for cross-functional stakeholders and delivered detailed landscape analyses to help clients grasp the evolving regulatory expectations and practical implications of age assurance measures.
Review and identify vendors: We assessed the vendor landscape for age assurance, developed scorecards for evaluating vendors on a number of key metrics, and recommended the solutions that were the optimal fit for each client.
Implement a solution: Many sessions in codebases and sandboxed environments later, we helped our clients launch their age assurance solutions, develop their teen and parent-facing launch materials, and put in place a robust system for addressing age appeals that met legal and ethical requirements.
Youth-centred AI frameworks
We designed a comprehensive AI framework to help clients building educational AI products create age-appropriate AI experiences. The framework is built on global governance principles like the UK Children’s Code, the EU AI Act, and the UNCRC’s General Comment No. 25, along with insights from researchers, parents, and teens. Key elements include guidance on how AI products should safeguard youth privacy, promote equity, and tailor interactions to different developmental stages. We worked closely with our partners to translate these principles into actionable product interventions, enabling them to deliver engaging, safe, and ethical tools for young learners.
Integrating youth councils into product development
Recognizing the importance of centering young people in decisions about their digital experiences, we helped clients integrate youth advisory councils into product development. From blue-sky ideation to post-launch feedback, we developed a process to truly centre youth voices in policy and product decision-making. Our flowchart process outlined how to build accountability mechanisms into consultations, how engineering teams can account for feedback loops, and how to structure the engagements themselves to generate meaningful dialogue among youth stakeholders.
Reducing the viral spread of harmful content
We partnered with a client to tackle the challenge of viral self-harm and suicide-related content spreading through recommendation systems. By analysing platform dynamics and user behaviour, we identified a few key factors driving the virality of such material on the network. Our recommendations included algorithmic tweaks to downrank harmful content, improved detection mechanisms, and user-centric interventions to redirect at-risk individuals to supportive resources. These interventions demonstrated how thoughtful product design can address critical safety concerns without stifling user expression.
Policy and product designs for safer youth experiences
We worked across diverse sectors to embed safety by design into product and policy strategies. This included mapping youth user journeys for a marketplace client to identify risks and design targeted mitigations, crafting privacy policies for a gaming platform to ensure the ethical handling of youth data, and translating academic and industry research into practical product roadmaps for a civil society organization focused on youth safety outcomes.
So what’s next?
2024 was the beginning of an exciting journey for us, and 2025 will be a watershed year for many key child safety developments across sectors like AI, gaming, social media, and online marketplaces. A few things we have in store:
Rebranding for the long haul: A new look and feel for our brand is coming, including a name change (hint: it won’t be that different), a refreshed colour palette, a potential shift to American English (!), and a streamlined brand experience across different platforms.
Regular, accessible analyses: Quire’s insights seem to be valuable for a broader range of stakeholders than we initially expected, so we will work to produce a more extensive range of analyses in the new year that more people can easily use in their work.
A platform for our projects: We’ve heard repeatedly from clients that it would be great to manage and tweak our engagements more quickly, so we are building just that in the new year.
In the news
"All businesses that target children, have children as end users, or otherwise affect children, have a responsibility to respect children's rights in the digital environment." - Afrooz Kaviani Johnson, UNICEF, at the Internet Governance Forum 2024
There has been a huge wave of news these last few weeks, which should send you into the holidays with enough reading materials on those long trips back home.
Regulatory moves around age-appropriate content: Elon Musk has thrown his weight behind the Kids Online Safety Act, calling for it to pass in the House of Representatives by the end of the year, but House Speaker Mike Johnson has said it is unlikely to come to the floor. X worked with senators to introduce additional language that restricts the bill from regulating content, aiming to address fears over censorship and focus purely on product design processes. In the UK, Ofcom released its first codes of practice around illegal harms, mandating senior accountability for compliance, improved moderation with robust training and accessible reporting systems, and enhanced algorithm testing to curb illegal content. On the other side of the world, China’s newly announced "Minor Mode" initiative for mobile developers will require apps to implement age-appropriate content filters that emphasize educational content and “life skills”. Device manufacturers will also need to restrict minors’ screen time on handheld games to 3 hours a week, along with offering mandated parental controls.
New teen safety tools: In response to significant public and regulatory scrutiny, Fortnite and Character.ai have each launched new teen safety tools. Fortnite’s new safety suite includes text chat filters, enhanced parental controls, default text chat restrictions for players under 13, and stricter moderation tools to improve in-game safety. Character.ai shared that it has been building a separate LLM for teen users and is planning to introduce parental controls. While the latter’s updates are more abstract, the company does seem to have recognized the need for prominent disclaimers reminding children that chatbots are not real companions, a key recommendation from our age-appropriate AI framework.
Vulnerabilities in app store oversight: Two new investigations point to a significant gap in child safety protections at the app store level. A New York Times investigation found that apps containing CSAM were available on Apple’s and Google’s platforms, raising concerns about the effectiveness of their content moderation practices. Similarly, a BBC investigation found that top mobile games in the UK failed to disclose the presence of loot boxes (in-game purchases with randomized rewards), despite regulations requiring such transparency.
Child sexual exploitation and generative AI: The National Center for Missing & Exploited Children (NCMEC) reports over 7,000 incidents involving AI-generated child exploitation in the past two years, highlighting the technology's role in producing synthetic media that harms children through harassment, exploitation, and emotional distress. Thorn also published a three-month progress report on the principles that leading AI companies signed on to earlier this year, documenting the updates companies have made since those voluntary commitments.


