Renewed questions about the role of government in securing children’s rights
To wax poetic, what’s in an age?
In the month since our first issue, governments around the world have pushed to more actively regulate how companies build experiences for children. After the January child safety hearing in the US Senate, the Kids Online Safety Act gained the backing of fifteen more Senators across the aisle. It’s now poised to be the first piece of child safety legislation in 25 years to even make it to the Senate floor for a vote. I shared a brief recap of the recent changes to the bill that preceded this support on X and on Bluesky (surprisingly, the latter was the spicier conversation of the two).
Seeing this sustained support, other legislators have started introducing their own bills too. Tech industry groups, which strongly oppose KOSA, sent a letter to congressional leaders asking them to pass an alternative Invest in Child Safety Act, which would direct more funding to law enforcement to investigate child predators. Interest in COPPA 2.0, an updated version of the 26-year-old act, is also increasing; it seeks to extend protections to everyone under the age of 17, not just children under 13.
Up north, Canada introduced its own Online Harms Act, which would require platforms to be accountable for reducing users’ exposure to harmful content and for helping to prevent its spread. Similar to KOSA, it imposes a duty of care on companies to reduce the risk of adults or children being exposed to harmful content. It specifically calls out the need to better protect children, and echoes the argument I’ve previously made: that it’s ineffective to offload all the responsibilities of online safety to parents.
Meanwhile, in Europe, the Digital Services Act officially came into effect for all companies. For youth policy, this means companies have to show that they’re designing for minors with the highest levels of privacy, safety, and security, adopt standards for protecting minors, and participate in codes of conduct on their protection. In practice, they can no longer target ads at minors based on profiling of their personal data, must make their terms of service easily understandable to minors, and must expedite reviews of reports from trusted flaggers. Just a few small changes!
The worst form of government, except for all those others that have been tried
It’s clear by now that not all regulation is good or evidence-based. A federal judge had to remind Ohio’s attorney general that children have First Amendment rights too, issuing a preliminary injunction against a new law that would have required people under the age of 16 to get their parents’ permission to use social media. In Florida, it fell to Governor DeSantis to veto a bill that would have effectively banned teens under the age of 16 from having social media accounts. On a panel on the future of trust & safety hosted by All Tech Is Human a few weeks ago, I highlighted that bad regulation can be a blunt tool that absolves us as a society of creating responsible experiences for young people. Governments, civil society, companies, parents, and educators all have a duty to build appropriate policies and products for them, not to cut them off from these experiences altogether.
In the face of increasingly alarming stories about the state of children online - from parents who exploit their children as influencers on Instagram, to teenagers at a school in Beverly Hills distributing AI-generated nude photos of their classmates - regulation is going to keep looking extremely attractive to policymakers and civil society groups that would previously have tried to engage directly with companies. More movement towards self-governance models and industry-wide standards could head this off, but initiatives on that front have so far been few and scattered.
Case in point: Age verification
Age verification is a perfect example of the storm brewing around industry-driven versus regulator-driven protections. So much of developing age-appropriate experiences online depends on knowing how old someone is - especially once you start distinguishing between early teens and late teens. Several countries and US states are explicitly or implicitly weaving age verification requirements into legislation - from the UK’s Online Safety Act, to COPPA 2.0, which would extend protections to everyone under the age of 17, to several pieces of state legislation requiring both age verification and parental consent.
Although some companies have offered their own suggestions - such as requiring app stores to handle age verification - there have been limited industry-wide or multi-stakeholder efforts to meaningfully grapple with what age verification could look like, and what to advocate for. I recently developed a (much) more detailed version of the table below for a client, outlining the tradeoffs their service should weigh between different types of age verification. For a broader overview, with descriptions of each approach, check out the Digital Trust & Safety Partnership’s guiding principles around age assurance.
This is a meaningful opportunity for companies to work with policymakers and civil society groups to develop a vision for what age verification could look like across the industry. The consensus might end up being that it isn’t possible without unacceptable tradeoffs, that we need more robust privacy protections in place before an industry-wide solution can be reached, that different segments of the industry need different approaches, or some other option I’m sure I’ve left out. But engaging in this process in a transparent, inclusive manner would go a long way towards rebuilding public trust in the industry’s ability to govern itself responsibly, and would allow smart, future-proof regulation to emerge.
Looking ahead
I expect this cycle - companies as the innovative, untouchable darling, then a good company just doing its best, then an evil force potentially behind all social downfall - to accelerate rapidly when it comes to how we think about AI. We’re already seeing this play out in regulatory approaches to transparency from language model vendors. It seems quaint now that YouTube had a twelve-year grace period, from 2006 to 2018, before releasing the first community standards enforcement report in the industry (other companies quickly followed). In today’s regulatory climate, it’s inconceivable that any language model vendor will get a similarly long leash on being transparent about their systems and the protections they have in place for users.
A few years ago, I had the pleasure of sharing Instagram’s perspective and recommendations with Samaritans, a mental health charity in the UK that convened companies to develop a set of industry guidelines around managing self-harm and suicide content online. These guidelines achieved two valuable goals: they ensured that companies were part of the development process and could, in a protected environment, discuss what was or was not practical to implement; and, more importantly, they provided a blueprint for the many smaller companies that may not have had the resources to build such guidelines on their own. I would love to see more collaboration like this between companies, policymakers, and independent researchers, facilitated by trust & safety practitioners.
What (else) I’m reading
The Stanford Cyber Policy Center released a white paper on the different approaches legislators have adopted or proposed to protect children from online harms. If you’re dipping your toe into the world of child safety legislation, this is a great overview of the approaches currently being considered and some of their tradeoffs.
TikTok released an empathetic and practical guide on how to share your story online while being mindful of how it might affect you or others. There’s great potential to incorporate these tips into companies’ community guidelines or product policy guidance.
Engine.is, an advocacy group for startups, released a comprehensive report on the impact that age verification requirements could have on small businesses. It outlines the different options available to verify age, and estimates what they would cost startups to implement.
Mental health is a key concern for Gen Z as they get ready to vote in the upcoming US election, with nearly every policy priority now including a mental health component.
Common Sense Media released a report on the state of kids and families in America, highlighting that “the negative effects of social media on young people's mental health is a top concern, including across party identification.”
*
We are two issues in and actively seeking feedback! Send your thoughts and feedback, or reach out if you want to work together on these issues.