Regulatory pressure forces big changes at tech platforms
Product roadmapping season is in full swing! If you are interested in building features or policies that protect and support young people, reach out to see how we can help.
We don’t have a project that we can publicly talk about this month, which made this issue one that we debated sending out - and eventually decided to delay by a week. While we’re committed to making information and resources about youth safety more accessible, the sensitive nature of this work sometimes makes it challenging to share details before they are public. That said, watch this space for the November issue, where we’ll be sharing some exciting work.
In other news, our last issue of Quire, on how civil society can help gaming companies, had our highest readership to date: 2,385 views! This feels like a huge milestone for our seventh post. Thank you, readers, for continuing to read and share this monthly youth tech policy brief.
Teen Accounts and Regulatory Pressure
Over the last two weeks, I’ve received a number of questions from individuals, media, and regulators about what I think of Instagram’s announcement of Teen Accounts, given my previous role as the head of safety & wellbeing for the platform. I no longer work there, so naturally I can only speak for myself, not my former employer.
From an outsider’s perspective, the changes seem like a direct response to regulators in Europe and attorneys general in the United States who have been doggedly demanding that the company move faster and do more to protect young people using its services. It is probably not a coincidence that the changes were announced on the same day the US House of Representatives began to mark up the Kids Online Safety & Privacy Act (KOSPA).
Yet most of the announcements are extensions of existing features, rather than a substantive reevaluation of the teen experience on the platform. They do not directly address concerns around addictive features, inappropriate recommendations, and harmful amplification - key concerns among parents and policymakers alike. They do, however, significantly increase parental control over young teens’ time on the service.
Journalists like Casey Newton at Platformer and Naomi Nix at the Washington Post have written thoughtful pieces about the regulatory pressures that likely drove Instagram’s changes and why they may not be enough. To make the announcement easier to assess, here is a quick analysis of some of the features and what has actually changed:
In summary, Instagram seems to have bundled into Teen Accounts a mix of pre-existing features and measures it previously resisted. This shift highlights how even industry giants can find themselves playing catch-up when unprepared for evolving regulations and societal expectations around youth safety. As a team dedicated to helping companies anticipate these changes, we see Instagram as a compelling case study in how crucial it is for platforms to stay ahead of the curve in safeguarding young users online.
Some interesting engagements
I wrote for Tech Policy Press about the impact KOSPA might have on small and medium-sized companies’ product roadmaps - it might prove an interesting read. Larger companies with extensive resources will most likely be able to manage whatever additional requirements the final law imposes, but the same is not true for more resource-constrained companies.
I spoke to WebPurify, a leading content moderation & review service, about how to protect young shoppers in e-commerce spaces (blog post). The longer eBook also features valuable statistics around children and teens’ retail behaviours online, with additional insights from WebPurify’s VP of Trust & Safety, Alexandra Popken.
I joined Marketplace Risk New York to discuss child safety in the marketplace, helping companies think about how to design retail and e-commerce experiences that are age-appropriate by design, rather than as an afterthought.
I joined a stellar panel of child safety experts, practitioners, and youth activists at Ensuring a Safe Online Environment for Youth, hosted by All Tech Is Human. We discussed the challenges and opportunities around youth safety online as well as how solutions can centre children while respecting human rights.
What we’re reading
Spotify launched a pilot earlier this month for a parent-managed Premium account for children. It looks like the music streaming giant that 1 in 3 families listens to with their children has been doubling down on child safety in recent months. In May, it added child safety NGO Thorn to its Safety Advisory Council. In July, it announced that it was joining the Tech Coalition and released a short guide for parents of young users. As Spotify moves into the social space with its launch of short-form video that listeners can engage with, these developments seem like a way to get ahead of some of the known risks to children. There are still child safety issues to address: last year, a British MP spoke out about how an 11-year-old was enticed into sharing explicit photographs on the platform, and age-inappropriate content continues to be easily discoverable and shareable on most music platforms.
The war of words around social media’s impact on teen mental health continues. Jonathan Haidt, whose book, The Anxious Generation, kicked off the current high-profile round of debate around child safety, co-authored, with frequent collaborator Zach Rausch and LGBTQ+ activist Lennon Torres, a response to critics of the Kids Online Safety Act in The Atlantic. Amongst some more familiar arguments, they suggest that, even though marginalised minors may gain some benefits from social media, they are also disproportionately impacted by its harms. On the other side, a roundtable discussion hosted by the Child Mind Institute (video and summary) featured several scientists, including Pete Etchells and Candice Odgers, who have both publicly challenged Haidt’s conclusions in the past. The panellists agreed that there are risks from social media, but emphasised that impacts on each child will differ and that there is no convincing evidence that it is a primary cause of mental health problems.
A positive trend in the public policy discourse about youth and online safety is that input from young people is being taken more seriously; the article and discussion linked above both give time to minors and their views. This report from last October by the WeProtect Global Alliance (together with a UN office) adds important background data from a global poll of children about their perceptions of and opinions about online safety. A recent article in Education Week discusses a pilot digital citizenship curriculum that goes far beyond instruction in personal online safety best practices and actually engages students in discussion of tech and society policy questions.
Debates continue about the merits of penalising big tech leaders for harms facilitated on their platforms, with X withdrawing personnel from Brazil and the UK’s Online Safety Act including individual liability under specific circumstances. Readers of Quire are likely aware that some of the worst abuse happens on less prominent platforms which appear to welcome, or at least turn a blind eye to, illegal activity. In recent weeks, Telegram’s CEO, Pavel Durov, was arrested in France and charged with offences that included complicity in the distribution of child sex abuse images. Telegram’s failure to cooperate with French authorities regarding CSAM appears to have been at the heart of the investigation and arrest. Relatedly, Michael Lacey, the founder of Backpage, was finally sentenced to prison time and fined on a money-laundering count. The website was seized in 2018 and its executives were charged with offences including facilitating prostitution, allegedly including underage prostitution and child sex trafficking. Lacey still faces other charges.
On the more common issue of platform liability for harms to children, there have been two recent lawsuit developments. Under the Section 230 liability shield, platforms in the US are almost never legally responsible for the content posted by their users. The Ninth Circuit Court of Appeals ruled that a case against Yolo can proceed, however, on the grounds that the company misrepresented its terms of service: it claimed to act against abusive users but entirely failed to do so in the case of a teenager who was bullied on the app and died by suicide. Though this ruling also upheld an earlier decision that claims of harmful design against Yolo were impermissible, New Mexico has adopted the harmful-design approach in a new lawsuit against Snap. One unusual aspect of this suit is that state investigators took a page out of civil society researchers’ playbooks by setting up a decoy Snapchat account for a fictional minor and observing outreach from seemingly predatory adults.
Perhaps surprisingly, in light of the aforementioned lawsuit, Snapchat is experimenting with allowing 16- and 17-year-old users to post more public content, with some additional guardrails regarding privacy, content moderation, parental tools, and mental health. While this approach will no doubt be questioned by some, it is interesting to see thought going into the distinction between older and younger teen users.
As part of its continuing focus on image-based sexual abuse (IBSA), the White House announced a set of voluntary commitments from AI model developers and data providers to tackle AI-generated CSAM and NCII (non-consensual intimate imagery). These included measures taken in sourcing datasets and in preventing abusive outputs. The announcement was accompanied by the release of a set of IBSA principles developed by an NGO-led working group of civil society and industry representatives. For more detail on where we stand with AI-generated CSAM today, see the conclusions that David Evan Harris and Dave Willner published from their investigation into the models that are creating it and what can be done to prevent it.
Are you a company, nonprofit, or government interested in partnering with us? Learn more about what we do and get in touch for a complimentary consultation.