The US surgeon general calls for warning labels on social media
The pitfalls of scaring consumers instead of developing better rules of the road
We had an entirely different issue in mind for this week, but yesterday’s NYT opinion piece by Dr Vivek Murthy, the US surgeon general, had other plans in store for us. In his first recommendation since issuing a social media health advisory last year, Dr Murthy called for Congress’s support in imposing warning labels on social media platforms, similar to those on tobacco products. I joined a live BBC broadcast with Christian Fraser to discuss the merits and drawbacks of this idea.
In my conversation with the BBC, I noted that it is more appropriate to compare social media to cars than to tobacco. Like cars, social media can be incredibly valuable for youth when designed well: it helps them build community, and some of the most powerful youth activism today is fuelled by the scale and visibility social media provides. Tobacco, on the other hand, has no such benefits, and grouping it with social media ignores the value the latter adds to young people’s lives.
Yet the surgeon general’s anxiety about social media is understandable. His comments reflect a polarising debate in the United States about the role social media plays in children’s wellbeing, underlined by a lack of public trust in tech platforms. He didn’t make his recommendation in isolation; he also called for more legislative action, for companies to transparently share data with independent scientists, and for schools to limit smartphone use on campus. Some of the thinkpieces published right after his essay unfairly ignored these other recommendations, making his position seem starker than it was.
The more concerning aspect of this conversation, however, is that we once again seem headed towards assigning responsibility to consumers rather than companies. Warning labels on social media would pressure parents to monitor their children’s use of social media even more aggressively, yet they wouldn’t necessarily lead to any changes in how social media companies actually design their products. The resulting surveillance-heavy environment would significantly risk children’s privacy and agency, both critical aspects of their wellbeing as they grow into adults.
How we can promote responsible product and policy development
Encouraging companies to build better products by providing clear and consistent standards for responsible design would be more effective than putting pressure on parents to monitor children even more than they already do. This is not merely an opinion; there is now solid precedent for how we can encourage responsible design.
In the last four years, companies have done more for youth wellbeing than they did in the fifteen years prior. Most major platforms now default new teen users into private accounts, limit targeted ads based on their personal data or on-app activity, and are starting to more aggressively address how they recommend potentially harmful content. The key driver of these changes wasn’t shaming techniques like warning labels; it was thoughtful, future-proof regulation coming out of the EU and the UK. The more US policymakers can emulate that, as they’re currently doing in several states, the better we can protect young people’s wellbeing.
These frameworks can also support innovation and entrepreneurship around safety and wellbeing. To return to the car analogy, the companies that built cars did not build the first car seats for children; other innovative companies recognised the need in the market and provided solutions. I would love to see child safety and wellbeing similarly treated as an opportunity for innovation.
If good data and regulation can establish the expectations we as a society have of our platforms, new players can come in to innovate on how best to meet those expectations. As someone who led trust & safety efforts within tech companies for a long time and now advises them on how to build better products, I can confirm that there are committed, mission-driven folks within companies who are looking for more guidance on what to prioritise building.
In other news: I’m in Brussels this week for a series of client workshops and am looking forward to a week full of vigorous debates and disagreements. One of the reasons VYS came into existence was to support more thoughtful and informed discussions around children’s rights online, which naturally lends itself to a wide range of opinions on the best ways forward. It also helps that the weather forecast is resolutely gloomy for the whole week; all the better to huddle around conference tables and really dig into some of the thornier issues around children’s safety.
What else I’m reading
The National Institute of Standards and Technology (NIST) published a study evaluating six age estimation software technologies, its first such study in over a decade. Improved technology has reduced the error margin from 4.3 years to 3.1 years, and NIST expects this margin to narrow further with the rapid development of AI models. This is a significant development for policymakers who have been advocating for more rigorous age assurance online, as well as for tech platforms and industry groups that have been concerned about the risk of false positives.
This primer on child safety online legislation makes several significant claims I disagree with, but it is a useful guide to understanding the range of legislation around children’s safety and the arguments raised on each side. While the primer is a collaboration between several prominent academic institutions, it contains too many general assertions that are not backed by evidence (one example: attributing all recent attempts at legislation to societal moral panics without critically considering the safety and wellbeing concerns that have been raised).
On a lighter note, the French and Korean data protection authorities partnered on this charming poster to better inform young people about how they can protect their privacy online. The poster provides accessible, age-appropriate descriptions of the right to access, correct, object to, transfer, or erase information about yourself online. I love this.