Regulation, resources, and youth voices
A special issue featuring a youth leader's perspective on warning labels
This month, we’re excited to feature a youth leader’s perspective on the Surgeon General’s call for warning labels on social media. Chloe Kim is a high school senior based in Andover, MA, and a youth leader at #GoodforMEdia, an initiative from the Stanford Center for Youth Mental Health & Wellbeing. She is passionate about teen mental health, especially where it intersects with digital wellness. In her essay, Chloe argues that while social media can be hard to navigate, it can also be an incredible tool for fostering meaningful connections and self-expression. We’re excited to support #GoodforMEdia’s efforts to help young people practice healthier ways of engaging with social media. You can read Chloe’s essay after the next few sections, or jump ahead.
Looking back and coming up…
We will be part of a few great panels at the annual Trust & Safety Conference next week in San Francisco. If you are interested in chatting about all things youth, privacy, and safety, do reach out.
We recently joined an engaging livestream on the topic of social media warning labels hosted by All Tech Is Human, featuring Dr. Megan Moreno (Professor of Pediatrics and Affiliate Professor of Educational Psychology at the University of Wisconsin-Madison) and moderator David Ryan Polgar (Founder and President, All Tech Is Human). You can watch the full livestream at the link above; it was a great example of how civil society, industry, and academia can tackle these issues together.
Regulatory momentum continues to build for companies with youth audiences
The Federal Trade Commission banned anonymous messaging app NGL from hosting minors on its service, and made it return $4.5 million to customers whom it charged for a “pro” paid version of the app. NGL, which had gained popularity with children and teens, promised to be a safe space for them “to share their feelings without judgement from friends or societal pressures”. Unfortunately, anonymous messaging platforms regularly see widespread bullying, and it appears that NGL’s design may have produced similar outcomes.
The most recent Tech Policy Press x YouGov field poll in the US finds widespread support for the US Surgeon General’s warning label recommendation (65%), banning smartphones in school (51%), and banning data collection for targeted advertising to children and teens (59%). The detailed findings paint a telling picture of where popular sentiment in the US lies relative to industry, academic, and civil society perspectives.
Game developer Tilting Point agreed to a $500,000 settlement with the California Attorney General and Los Angeles City Attorney’s offices for collecting and processing children’s personal information without consent, as well as serving age-inappropriate advertising to children. Tilting Point’s age screen was found to be neither neutral nor effective, and the company did not obtain verifiable parental consent before collecting children’s personal data.
Illinois became the first state to pass a law requiring parenting influencers to set aside 15% of their earnings for any of their children who appear on camera. The law, championed by teen activist Shreya Nallamothu, also allows children to request the deletion of videos featuring them once they turn 18. France introduced similar laws last year, and other US states are likely to follow suit.
Apple announced that the App Store will list regional age ratings in Australia and South Korea, in addition to its global age ratings. This will significantly impact game developers seeking to attract players from these markets; as one example, games that include loot boxes will need to disclose this to the App Store, which will automatically rate them 15+ in Australia.
Significant company news around youth safety
A day after NCMEC and Thorn released a new report finding that Snap was one of the top two platforms for sextortion, Snap published a blog post outlining new features to protect its community from sextortion. Most of the measures already exist on other platforms, save one: Snap will now warn teens when they receive a chat message from someone in a region where the teen’s network isn’t typically located.
A new media investigation in India found that Instagram accounts posting sexualised images of Indian children lead viewers to Telegram channels, where people sell child sex abuse content for anywhere from Rs 40 to Rs 5,000 (roughly USD $0.50 to $60). While encrypted messaging can be a valuable tool for youth privacy, it needs to be paired with safe recommendation surfaces. We spoke to the Wall Street Journal a few months ago about how pairing unsafe recommendation surfaces with end-to-end encrypted messaging is a “recipe for disaster”.
Twitch disbanded its Safety Advisory Council and replaced it with 180 Twitch Ambassadors, a community of streamers who are active on the platform. The Council was a mix of streamers, moderators, and outside experts with expertise in online harassment, but the company says that its Ambassadors will have more lived experience of using the platform and of the issues the community faces. It feels like the end of an era, but it is in line with tech companies increasingly cutting back on external advisory engagements.
New resources for product and policy teams
Game developers should be poring over the new Digital Thriving Playbook from the Thriving in Games Group (TIGG) and the Joan Ganz Cooney Center at Sesame Workshop. It offers a variety of design tactics to cultivate key markers of wellbeing like belonging and inclusion, and disincentivize disruptive behaviour like cheating, fraud, or inappropriate sharing.
Researchers at Virginia Tech are building AI-assisted chatbots to simulate interactions between predators and children. The chatbots will be used in educational programs teaching children aged 11-15 how to avoid sexual predators online. It is one of several promising new National Science Foundation projects taking on the ethically complex task of teaching children how to recognise signs of sexual grooming or predatory behaviour.
Roblox released an open source voice safety classifier and made it available for commercial use. This is the first publicly available ML classifier for detecting voice chat violations in near real time. It’s a powerful tool to make available to the larger community and should accelerate the development of more sophisticated voice safety classification models.
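For product teams curious what wiring such a classifier into a moderation pipeline might look like, here is a minimal sketch. It assumes the model is distributed as a Hugging Face audio-classification checkpoint with multilabel outputs; the model id, audio filename, and label handling below are illustrative assumptions, not confirmed details of the release.

```python
# Minimal sketch: scoring a voice chat clip with an open-source voice safety
# classifier. Assumes a Hugging Face audio-classification checkpoint with
# multilabel outputs; the model id below is illustrative.
import torch
import librosa
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

MODEL_ID = "Roblox/voice-safety-classifier"  # illustrative id; check the release

extractor = AutoFeatureExtractor.from_pretrained(MODEL_ID)
model = AutoModelForAudioClassification.from_pretrained(MODEL_ID)
model.eval()

# Load a short voice chat clip, resampled to the rate the extractor expects.
audio, _ = librosa.load("voice_chat_clip.wav", sr=extractor.sampling_rate)
inputs = extractor(audio, sampling_rate=extractor.sampling_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Multilabel scoring: one independent probability per violation category.
scores = torch.sigmoid(logits)[0]
for i, score in enumerate(scores):
    print(f"{model.config.id2label[i]}: {score.item():.2f}")
```

In a real deployment, you would likely run this over short rolling windows of live audio and route clips whose scores exceed a threshold to human review.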
Business Insider featured insights from Jigsaw on how Gen Z teens and young adults consume information online. While the research itself is solid, the article was disappointingly breathless about Gen Z’s susceptibility to misinformation (for example, expressing alarm that Gen Z turns to “like-minded, trusted influencers” to determine what is true, an entirely human quality).
Social Media Needs Reform, Not Labels: A Youth Perspective
Chloe Kim
On Monday, June 17th, U.S. Surgeon General Dr. Vivek Murthy called for social media warning labels stating that “social media is associated with significant mental health harms for adolescents.” Murthy cites evidence from tobacco studies, which show that warning labels helped reduce tobacco use. But how useful is this comparison?
As a teen, I certainly don’t think of my phone as a cigarette. There have been countless instances in which I’ve spent hours in a scroll black hole; I have experienced the pitfalls of social media and recognize that there is much that needs to change. However, apps like Instagram have also enabled me to express myself, connect with friends, and explore new hobbies. In fact, teens like me are more likely to say that social media has a positive rather than a negative impact on their lives.
Unlike tobacco, which is harmful even when used as intended, social media is not inherently dangerous. Social media’s impact on youth depends on how we’re spending our time online, necessitating an approach that centers on reform, not restriction. By equating social media to cigarettes, the Surgeon General seems to favor restricting use over improving platforms for young users.
Many teens already understand that social media can harm our emotional health. Given this, I am skeptical about how effective a restatement of this general sentiment will be in keeping youth off social media. A more productive use of the label might be to offer strategies for mindful engagement: what do we want adolescents to keep in mind as they scroll? Seeing the label as an opportunity to build digital literacy – not further fear-mongering – would better support teens’ relationship with social media.
Or perhaps the whole idea of a label falls flat. Warning labels may give the user the impression that any harm from social media is their own fault, offering a mere “I-told-you-so” explanation for why these platforms are causing distress. In doing so, these labels would shift responsibility away from platforms and policymakers, ultimately distracting from the reform that is sorely needed.
Having been affected most by social media, young people are in a unique position to drive reform and are eager to take initiative. For instance, my fellow youth leaders at #GoodforMEdia and I work to support healthy engagement with social media through youth testimonials, workshops, and resources designed by teens for teens. We also share our practical policy and product ideas directly with platforms and lawmakers, and wish we had even more opportunities to have our ideas heard.
From my own experience navigating social media, I feel strongly that the most effective way to encourage healthy social media use is to make the digital landscape safer and to equip youth with the tools to protect their mental health online. Rather than pursuing a confusing label, the Surgeon General should create space for youth to help shape a safer digital landscape. Only then can we improve social media to the point that no warning labels are needed.