What will influence companies' child safety roadmaps in 2025?
The waves shaping safety and wellbeing for kids and teens
Two weeks into 2025, we have seen a slew of predictions around online child safety and what companies will need to build. Yet if there’s anything we know about the online child safety space (and the digital world at large), it’s that things move fast. Just look at the last two weeks in the United States alone, as the Supreme Court heard oral arguments on banning TikTok, and Meta moved to reduce its proactive moderation of harmful content.
While these round-ups are insightful in understanding the possible futures ahead, they’re also based on a singular snapshot in time. In this first issue of Quire in 2025, we’re helping product and policy teams chart a steady direction for the year based on what we know, what we’re working to learn, and what we value.
What should tech companies, civil society, and governments monitor and pay attention to?
Where are the key waves of action emerging in this space, and how are these tides shifting?
How can stakeholders proactively consider, plan for, and respond to these unfolding issues?
None of this is simple—which is why we want to explore four key areas of focus in this issue that will, in some way, shape the world of online child safety this year:
Age Verification
Addiction & Dependency
AI-Generated Harms: Companions and CSAM
Safety by Design, Duties of Care, and Remedies for Victims
Within these thematic areas, we’ll examine the changing nature of online child safety risks; the regulatory requirements, expectations, and attitudes that will define what can and should be done; and the prevailing opportunities and challenges faced by different stakeholders.
As VYS enters its second year of operation, we’re excited to continue being your center of excellence for navigating these waves.
Age Verification
Context: After years of debate, we now have general consensus that a “one-size-fits-all” safety approach simply doesn’t work for every minor under the age of 18. As calls grow to enforce age-dependent permissions for certain online experiences, the question remains as to how companies can best implement age assurance mechanisms while preserving user choice, privacy, and autonomy.
Age assurance has become an increasingly hot topic as some governments take more aggressive approaches to online child safety, raising the question: what does effective implementation of age-dependent regulation actually look like?
A few stories that we’re tracking around the world include:
Australia takes action: Rather than continue to risk harm to young children on social media, Australia recently passed the Social Media Minimum Age Bill, which we covered in a recent post. Over the next twelve months, the law will require social media platforms to take “reasonable steps” to bar users under 16 from creating accounts in the first place.
Similar moves across Europe and Florida: Europe has seen a comparable smattering of age-based or parental-consent-based barriers to social media access, notably a 2023 French law and a recent Norwegian pledge to tighten its own minimum age rules. And as of the new year, a Florida law aimed at blocking social media use by children under 14 went into effect—though enforcement is expected to be delayed as the law faces ongoing constitutional challenges. India’s new draft data protection bill also contains significant age assurance and verifiable parental consent requirements.
KOSA’s failure in the House: Parent and civil society advocacy groups had broadly supported the Kids Online Safety Act and its push for companies to demonstrate a duty of care in their product and policy design processes. With the bill stalling in the House, these groups may pivot to pushing for federal age-based bans similar to those being considered at the state and international level.
Key Takeaways: The advent of increasingly sophisticated age verification technologies may offer some solutions, which we broke down in a previous post and will dive into further this month. Of course, the topic is still deeply debated and dependent on each use case—whether at the level of an individual platform, app store, internet service provider, or otherwise. But one idea that’s gaining traction is device-level verification—so much so that the International Center for Missing & Exploited Children (ICMEC) made a statement in favor of it last year.
What Companies Can Do:
Collaborate with policymakers and civil society organizations to share what age verification models may look like, map tradeoffs for different use cases, and establish industry standards for age verification.
Monitor unfolding global regulatory developments where age assurance will be critical to compliance—and proactively assess the feasibility of different age verification models in each case.
Design and implement a privacy-preserving age verification strategy that can flexibly adapt to different regional legal requirements, to avoid being shut out of key markets (see the sketch after this list).
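To make that adaptability concrete, below is a minimal sketch of how regional minimum ages and assurance methods might be encoded as a single lookup that product logic consults at account creation. The jurisdictions, thresholds, and method names are illustrative placeholders, not statements of what any law actually requires; real values would need to come from legal review.

```python
# Minimal sketch of a region-aware age-assurance policy lookup.
# All thresholds and method names below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class AgePolicy:
    min_age_for_account: int   # youngest age allowed to hold an account
    assurance_method: str      # e.g. "self_declaration", "age_estimation", "verified_parental_consent"

# Hypothetical per-jurisdiction policies; real values must come from legal review.
POLICIES = {
    "AU": AgePolicy(min_age_for_account=16, assurance_method="age_estimation"),
    "FR": AgePolicy(min_age_for_account=15, assurance_method="verified_parental_consent"),
    "US-FL": AgePolicy(min_age_for_account=14, assurance_method="age_estimation"),
    "DEFAULT": AgePolicy(min_age_for_account=13, assurance_method="self_declaration"),
}

def required_assurance(jurisdiction: str) -> AgePolicy:
    """Return the age-assurance policy that applies in a user's jurisdiction."""
    return POLICIES.get(jurisdiction, POLICIES["DEFAULT"])

def may_create_account(jurisdiction: str, assured_age: int) -> bool:
    """Check an already-assured age against the regional minimum."""
    return assured_age >= required_assurance(jurisdiction).min_age_for_account

# Example: under this illustrative configuration, an assured 15-year-old
# cannot register in "AU" but can under the default policy.
assert not may_create_account("AU", 15)
assert may_create_account("DEFAULT", 15)
```

Keeping these rules in one declarative table makes it easier to adjust as new regulations land, without rewriting sign-up flows for each market.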
Addiction & Dependency
Context: Social media continues to attract young users, and for substantial amounts of time. Over half of U.S. teens spend four or more hours per day on social media—with 41% of those with the highest social media usage citing their overall mental health as poor or very poor.
These concerns around addiction and dependency are now spilling over into the gaming industry, which provides some of the most popular social venues for children and teens today. Indeed, the connection-based attributes of social gaming—such as multiplayer modes, user-generated content, and in-game communication tools—have translated with wild success to an eager audience of young gamers. In the United States, more than 90 percent of children play video games, and 85 percent of all teens.
Mental health, addiction, and gender: The prevalence and intensity of youth gaming habits are fueling growing waves of concern—particularly around addiction, mental health, and the lingering impacts of the COVID-19 pandemic on children’s and teens’ abilities to connect with the world around them. Additionally, the impact of gaming on children increasingly varies by gender—with boys more likely to identify as gamers, become addicted, and experience bullying while playing.
Social media and information dependency: As much as young people use social media for entertainment and connection, they’re also increasingly relying on it for information and news—leading to resurgent concerns about the impact of online misinformation and radicalization, particularly in alt-media spaces. Over 60% of Gen Z teenagers report regularly turning to social media for news—following in the footsteps of young American adults, 40% of whom regularly get news on TikTok. Meanwhile, as platforms are inundated with AI-generated content, they face an increasingly complex moderation challenge: balancing safety against excessive content filtering for these young users.
Key Takeaways: As social and gaming platforms continue to grow as a go-to “third place” for youth, these companies will be under greater pressure to respond to parental concerns and enact reasonable safety protocols—as seen in Roblox’s updates last November. Similar parental pressure will continue to mount around social media. At the heart of all these conversations is a question of how young people make sense of the world around them using technology—namely, what filters, algorithms, and online communities are informing their ability to do so.
What Companies Can Do:
Partner with researchers and civil society organizations to better understand the experiences of young people across varying genders and types of gaming / social media platforms—and inform what effective policy and product mitigations may look like. (See our previous post on civil society partnerships in gaming.)
Test and implement platform design features that help detect overuse and nudge younger users towards healthy usage habits, such as time management tools and parental controls (see the sketch after this list).
Strategize compelling, evidence-based arguments that make both the ethical and business cases for combatting youth addiction and dependency—which can be leveraged when communicating with internal teams and leadership.
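As one illustration of the kind of design feature described above, here is a minimal sketch of a time-management nudge that tracks a minor's cumulative daily usage and surfaces a single break prompt past a threshold. The threshold and names are hypothetical and would need to be calibrated with researchers and young users rather than taken as recommendations.

```python
# Illustrative sketch of a time-management nudge for a minor's account:
# once cumulative daily session time passes a threshold, show one break prompt.
from datetime import timedelta

# Hypothetical value; should be calibrated with research partners, not assumed.
DAILY_NUDGE_THRESHOLD = timedelta(hours=2)

def should_nudge(todays_usage: timedelta, already_nudged: bool) -> bool:
    """Decide whether to show a take-a-break prompt, at most once per day."""
    return todays_usage >= DAILY_NUDGE_THRESHOLD and not already_nudged

# Example: 2.5 hours of use today triggers a nudge; 45 minutes does not.
print(should_nudge(timedelta(hours=2, minutes=30), already_nudged=False))  # True
print(should_nudge(timedelta(minutes=45), already_nudged=False))           # False
```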
AI-Generated Harms: Companions and CSAM
Context: Today, risky online interactions don’t necessarily require human-to-human contact. The increasing sophistication of generative AI has made sure of that. But the emotional, social, and psychological implications for children remain profoundly concerning—particularly in the context of relationships with AI-generated companions and the proliferation of AI-enabled CSAM.
AI companion industry booms: AI companions made headlines in 2024, and for troubling reasons—including Candy.AI offering a “build-your-own” AI partner service and a 14-year-old boy taking his own life after a Character.AI chatbot reportedly encouraged him to do so. As debate on the intersection of mental health and AI-based socialization continues, that very companion technology continues to progress—making these “inner worlds” increasingly realistic in their interactions with children.
“Nudify” apps proliferate: Recently the focus of a 60 Minutes piece, “nudify” apps that generate nonconsensual nude imagery from clothed photos of real people—including minors—continue to spread. But it’s not always strangers who are manipulating these apps to create non-consensual intimate imagery (NCII)—in some cases, youth and teenagers are leveraging these tools to target their peers. Frustrated by slow responses from Big Tech, governments are working to take action—including a lawsuit against nudify websites by San Francisco City Attorney David Chiu and a push by Australian eSafety Commissioner Julie Inman Grant for tech giants to remove such apps.
Key Takeaways: As parent-driven and government-led lawsuits pile up, so do public calls for more responsible development (including restrictions, warnings, and content filters tailored for young users), more stringent regulatory oversight of these technologies, and in some cases, complete bans on their usage. In order to adequately tackle AI-enabled CSAM, tech companies must work to detect and close loopholes exploited by perpetrators, along with strengthening reporting and response mechanisms across the ecosystem.
What Companies Can Do:
Implement age-gated features for AI companions to ensure their functionality is appropriately focused on education or entertainment—which may require further internal exploration of age verification, as discussed above.
Revamp and conduct regular risk assessments aimed at preventing developer accounts from using your company’s single sign-on, payment, or app store services for such “nudify” platforms.
Double down on commitments to cross-industry coalitions by actively contributing to cross-platform detection and reporting systems for known abusive content and perpetrators.
Safety by Design, Duties of Care, and Remedies for Victims
Context: In response to the growing online risks facing children, governments around the world are building regulatory regimes that tech companies will need to heed—both by assessing the risks of their existing practices and by architecting solutions that demonstrate real evidence of compliance. As the hotly contested Kids Online Safety Act (KOSA) heads back to legislative square one in the United States, other countries are moving ahead, with both bills in the works and laws on the books. Common themes include children’s safety by design, legal duties of care placed upon platforms, and providing victims with remedies for harm.
Mandating Safety by Design: In the UK, Ofcom rounded out the year by sharing new risk assessment guidance and illegal content codes of practice under the Online Safety Act (OSA), which requires platforms to be “safe by design”—reflecting a growing emphasis on upstream interventions to protect children online. Companies have until March 17, 2025, to comply.
Developing Duties of Care: After the EU’s Digital Services Act (DSA) went into full effect in February 2024, the European Commission is now turning to particular provisions within the law. This includes drafting guidelines for platforms regarding Article 28(1), which obligates platforms accessible to children to “ensure a high level of privacy, safety, and security of minors.” From July to September 2024, the EC held a call for evidence to help steer its drafting process—with prominent civil society groups pitching in—and plans to release and adopt the guidelines by summer 2025.
Supporting Victims via Remedies: Just as integral to regulating online harm is developing effective, accessible remedies for youth victims. In Singapore, the Ministry of Digital Development and Information (MDDI) and the Ministry of Law (MinLaw) are preparing new legislation and laying the groundwork for a new agency to enhance online safety and strengthen support for victims of cyberbullying, harassment, CSAM, and other online harms. The proposed agency has been noted to bear a striking resemblance to Australia’s eSafety Commissioner—pointing to a growing trend of government-led accountability bodies for private, tech-enabled harm.
Key Takeaways: In the theme of accountability, regulatory and legal pressures on tech companies continue to mount—particularly in U.S. courts, including the aforementioned lawsuit against Character.AI and the seemingly never-ending TikTok battle. As governments continue to build digital regulation, they’re increasingly placing the onus on tech companies to provide sufficient evidence of fulfilling their respective obligations—not just as publishers of content, as in the past, but also as product manufacturers.
Parent and youth advocacy will also contribute powerful voices to deciding what safer online environments ought to look like. Though the two groups may not always be of the same mind, growing frustrations from parents and youth alike reflect a deeper fracturing of public trust in tech companies and lawmakers to sufficiently do their jobs—whether that means instituting enforceable, meaningful regulation or making design changes that actually improve tech products and platforms.
What Companies Can Do:
Proactively integrate safety-by-design principles into your product development life cycle—including formalizing safety criteria, setting measurable safety goals, and conducting multidisciplinary risk assessments.
Create compliance roadmaps with clearly benchmarked goals that can be adapted as similar regulations arise in different jurisdictions.
Engage with parent and youth stakeholders by providing feedback opportunities via focus groups, workshops, and advisory councils—along with sharing regular transparency updates on evolving safety initiatives and challenges, and offering user-friendly direct support resources.
As you surf these waves of child safety pressures in 2025, we’re here to help. Reach out to schedule a free initial consultation today.