Bringing our biweekly LinkedIn news digest over to Quire
Quire is coming to you more regularly! Since the start of this year, we have been publishing biweekly news digests on our LinkedIn company page, providing more timely insights into the world of youth safety, privacy, and wellbeing. Beginning this week, we will cross-post those digests to our Substack, so that subscribers here can have easy access to these updates. These issues will supplement our monthly deep dives into specific projects or developments that we feel merit a lengthier conversation. Thank you for the thoughtful and engaged feedback over the last few months as we experimented with the right model!
Regulatory updates
Australia provides more clarity on its social media age restrictions. The Australian eSafety Commissioner has an update on the upcoming ban on the use of social media by under-16s, explaining the basic expectations for platforms and the agency’s enforcement powers. Notably, there is more clarity about the sequence of events to follow this year. The eSafety Commissioner will independently advise the Minister for Communications on services to be covered, following which the Minister may make legislative rules specifying services that are or aren’t covered.
Ofcom initiates enforcement programme for file-sharing and file-storage services with regard to child sexual abuse material (CSAM). As the enforcement of the UK’s Online Safety Act starts to roll out, Ofcom has informed file-sharing platforms of their new obligations and given notice that some of them will soon receive requests for information about their compliance with CSAM-related measures.
European Commission previews guidance on minor protection. At a public hearing hosted by a committee of the European Parliament, the DSA enforcement lead told legislators that age assurance and parental controls would be included in forthcoming guidelines.
NetChoice challenges Louisiana’s Secure Online Child Interaction and Age Limitation Act. The industry group is suing to block a new state law that would require age verification and parental consent for minors to access social media, as well as restrict data collection and adults’ on-platform access to minors. NetChoice cites First Amendment, data security, age-appropriate advertising, and parental rights arguments in its suit.
Court rules that Constitution protects private possession of AI-generated CSAM. A US district court dismissed one of four charges against a man who used generative AI to create CSAM and shared it with a minor on Instagram. The ruling determined that while the production and sharing of the imagery were illegal, the man’s possession of it was not, because no actual child was depicted. The government is appealing the ruling on the possession charge.
Governor of Querétaro presents initiative at local and federal levels to prohibit access to social media for children under 14 years of age [Spanish]. The Mexican politician’s proposals also recommend a ban on cellphone use in schools, and he has the support of at least one federal senator.
Philippines human rights watchdog expresses support for proposed internet safety education bill. The proposed legislation, which would create a mandatory education program for elementary and secondary schools, is intended to complement a 2022 bill that targeted the production, distribution, and possession of CSAM. The Philippines has been identified by UNICEF as “the number one global source of child pornography and a hub for the live-stream sexual abuse trade.”
Online Safety Assessment Report finds Social Media Services have safety measures in place for Singapore users but more needs to be done. IMDA, the Singaporean regulator, has released its first report assessing compliance with the Code of Practice for Online Safety for designated social media platforms. Among the children’s safety measures, the report highlights minors’ access to inappropriate content, in particular pornographic content on X.
South Korea’s approach to age assurance. A Tech Policy Press analysis of South Korea’s regulatory journey, contrasted with the more recent debates and efforts in Western countries over how to implement online age assurance. South Korea’s policies have evolved over more than two decades, starting with requiring government identifiers, then navigating court challenges on privacy grounds and implementing age-tiered online experiences, and now relying on regulator-designated mobile carriers and credit card providers for age verification.
Industry news
Instagram partners with schools to prioritize reports of online bullying and student safety. Instagram has announced a new expedited pathway for US middle and high schools to report violative posts and accounts that impact their students.
YouTube and industry leaders announce the Youth Digital Wellbeing Initiative. A set of commitments, agreed upon by YouTube and a group of content producers and distributors from around the world, that centers on empowerment, protection, and quality of content.
Character.AI has released Parental Insights, a new feature that gives parents reports about how much time their child has spent on the platform, and with which chatbots they have been interacting. The prominent recreational AI chat platform has been in the spotlight for child users’ safety after a 14-year-old died by suicide last year, seemingly with some encouragement from the Character.AI chatbot that he believed himself to be in love with. Challenges for the platform continue, as evidenced by a recent report that it was hosting several user-created bots mockingly imitating the deceased teenager.
The Snapchat move that leaves teen girls heartbroken. A look into how a premium feature that lets users see when their messages have been “half swiped” (i.e., read without triggering a read receipt) plays into the insecurities of some adolescents.
More than 110 child sextortion attempts reported each month to UK police forces. Noting the high frequency of reported sextortion attempts, the British National Crime Agency has rolled out an awareness campaign to reassure young victims and encourage them to seek support.
Research and civil society
Children & AI Design Code. Published by the 5Rights Foundation, this code provides protocols for ensuring that children’s rights are taken into account at each stage of the development and deployment of AI systems. Vys supported this work as one of the contributing experts on navigating children’s rights from a design perspective.
Children's wellbeing in a digital world. Internet Matters has released their annual index report, surveying British children and their parents. The report indicates growth in both positive and negative aspects of children’s online experiences and breaks out some data to focus on children with various vulnerabilities.
Detecting, disrupting and investigating online child sexual exploitation. A report from the Financial Action Task Force, an international body combating financial crime, on tackling sextortion and livestreamed abuse. It contains detailed research into the financial flows associated with these abuses and how law enforcement agencies can use financial data to combat them.
Balancing supervision & independence in the digital world. Google commissioned a study surveying and interviewing teens and parents in the USA, Brazil and Germany. The research explores attitudes towards parental supervision of online activity and concludes with both policy and product recommendations.
‘Literally just child gambling’: what kids say about Roblox, lootboxes and money in online games. University of Sydney researchers summarize a recent study where they gave children A$20 to spend on Roblox. Of particular concern to some participants were the complexity of converting real money into in-game virtual currencies and the random-reward mechanisms. The research spans November 2023 to July 2024; we shared a number of youth safety measures Roblox introduced last December in a previous issue of Quire.
Teenagers exposed to 'horrific' content online - and this survey reveals the scale of the problem. A Sky News survey and focus group amongst teenagers in one city in the North of England highlight how frequently the participants encounter inappropriate, and often unwanted, content in the course of their normal online activity.
Youth Select Committee publishes report investigating links between social media and youth violence. The Youth Select Committee, hosted by the UK Parliament and made up of young people, considered a broad swathe of evidence and offers a set of policy recommendations across a wide range of government functions. These include non-regulatory incentives for platforms to improve safety standards, further research, and opposition to a social media ban for under-16s.
Digital Exploitation: Mapping the Scope of Child Deepfake Incidents in the US. Encode, a US non-profit focused on AI harms, has produced an incident map tracking publicly known occurrences of deepfake nude images of minors in the United States.