Online child safety laws could help or hurt – 2 pediatricians explain what’s likely to work and what isn’t
- Written by Megan Moreno, Professor of Pediatrics, University of Wisconsin-Madison
Society has a complicated relationship with adolescents. We want to protect them as children, yet we also need to launch them into adulthood. Adolescents face risks as they test out independence, navigate peer relationships, develop an identity and make mistakes along the way.
Today’s teens have new areas of risk and opportunity[1] as they navigate the digital world, and this has led to debate over their social media use.
Concern about social media use by 13- to 17-year-olds has led to a patchwork of state initiatives as well as proposed federal legislation. Following the Surgeon General’s Advisory on Social Media and Youth Mental Health[2], issued on May 23, 2023, the Biden administration convened the Kids Online Health and Safety Task Force[3].
We are pediatricians who[4] study child online behavior[5], and we are co-directors of the American Academy of Pediatrics Center of Excellence on Social Media and Youth Mental Health[6].
As we consider the role of the federal government in regulating teen social media use, we believe it is important to support adolescents’ drive for independence and social interactions[7] while protecting them from serious harm and from having their identities commodified by powerful technology companies[8].
Without commenting on any specific piece of active legislation, here are the elements of any potential policy related to children and technology that we believe would be helpful, and those we are concerned could be harmful.
Ideal legislation
Key to any effective online child safety legislation is accountability, so that platforms are designed with the needs of children and adolescents in mind, rather than being driven by engagement and revenue goals.
Default privacy protections are also crucial. Young people often receive[9] – and don’t want – contact from unknown adults. These are typically marketers or random strangers, dubbed “randos.” Teens often teach each other ways to try to be safe[10], leading to widespread practices that may or may not be effective.
Methods for stopping online child sexual exploitation are not adequate[11], and elements of proposed legislation could help by limiting who can contact teens outside of their known social circles. Making young users’ accounts private by default would allow them to have online interactions just with friends and communities they seek out. Encouraging collaboration among technology platforms to flag social media users who pose a threat and identify problematic practices is also crucial.
Another helpful element of child online safety legislation is requiring better access to and control over platform settings. One challenge for social media users of all ages is to find and navigate the different available settings. These could be standardized to be readily accessible rather than requiring multiple clicks to find protections buried in an app’s settings. Young people describe wanting more control[12] in their platform use, including the ability to control their content, reset or update their algorithms, and delete data or accounts.
Prohibiting data collection from young people would also help. Behavioral data from digital breadcrumbs reveals a lot about users, which allows technology companies to sort them into categories to predict what they might buy or click on next. This practice is unethical because it can be used to exploit susceptibility to self-harm[13] and low impulse control. It also is incompatible with the adolescent development ideal of exploration[14] – teens are supposed to test things out, push boundaries and change. Teens are harmed when apps and sites nudge them in particular directions in order to profit from them.
Legislation could also require technology companies to take user-reported problems more seriously. The companies could make clear the process for reporting problematic content or people, and what steps they will take after a report. Anecdotally, we have both heard in our pediatric clinical practices that teens don’t make these reports because they don’t trust that anything will happen in response. There are several possible approaches, including direct reporting to platforms as well as designating an intermediary[15] to receive reports about problematic interactions on platforms.
Legislation could also focus on limiting the impact of misinformation, a problem teens encounter that is likely to grow with generative artificial intelligence. Platforms could mandate watermarking of AI-generated content. Platforms could also slow the spread of untrustworthy content by identifying super-producers and applying rate limits so that they can’t clog everyone’s feeds.
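For illustration only, here is a minimal sketch of what a per-account posting rate limit of this kind might look like. The cap, window and names (MAX_POSTS_PER_HOUR, allow_post, the example account ID) are hypothetical assumptions, not any platform’s actual system.

```python
# Illustrative sketch: a simple sliding-window posting cap that a platform
# might apply to accounts flagged as "super-producers." All values are hypothetical.
import time
from collections import defaultdict, deque
from typing import Optional

MAX_POSTS_PER_HOUR = 20   # hypothetical cap for flagged accounts
WINDOW_SECONDS = 3600     # rolling one-hour window

_recent_posts = defaultdict(deque)  # account_id -> timestamps of recent posts


def allow_post(account_id: str, now: Optional[float] = None) -> bool:
    """Return True if the account is still under its hourly posting cap."""
    now = time.time() if now is None else now
    timestamps = _recent_posts[account_id]
    # Discard timestamps that have fallen out of the rolling window.
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    if len(timestamps) >= MAX_POSTS_PER_HOUR:
        return False
    timestamps.append(now)
    return True


if __name__ == "__main__":
    # For this hypothetical account, the 21st post within one hour is rejected.
    decisions = [allow_post("super_producer_example", now=1000.0 + i) for i in range(25)]
    print(decisions.count(True), "posts allowed,", decisions.count(False), "rejected")
```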
The federal government could also fund additional research. Despite the past decade of prolific social media research, there remains a lack of common data formats, metrics to measure key concepts, and interventions to promote well-being. Funding to support research, including projects that bring together investigators from government, academia and industry, should lead to progress and innovation in this area.
Finally, legislation could help advance age verification. To enhance protections for adolescents, platforms need to know whether a user is a young person. Age assurance and age verification are complicated topics that researchers, policymakers and technology developers are studying to determine how to verify users’ ages without compromising their privacy. One option could be a new device setting that indicates to platforms, browsers and apps what age range the user is in so that they can apply age-appropriate protections for young users.
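To make the idea concrete, here is a minimal sketch assuming a hypothetical device-provided age-range signal. The signal values, tiers and default settings shown are illustrative assumptions, not an existing standard or any platform’s actual policy.

```python
# Illustrative sketch: mapping a hypothetical device-reported age range
# to protective default account settings. Values and tiers are assumptions.
from dataclasses import dataclass


@dataclass
class AccountDefaults:
    private_account: bool
    contact_from_strangers: bool
    personalized_ads: bool


def defaults_for_age_range(device_age_range: str) -> AccountDefaults:
    """Map a coarse, device-reported age range to default settings."""
    if device_age_range in ("under_13", "13_15", "16_17"):
        # Hypothetical minor tiers: private account, no contact from
        # unknown adults, no ad personalization by default.
        return AccountDefaults(private_account=True,
                               contact_from_strangers=False,
                               personalized_ads=False)
    # Otherwise, standard adult defaults that the user can adjust.
    return AccountDefaults(private_account=False,
                           contact_from_strangers=True,
                           personalized_ads=True)


if __name__ == "__main__":
    print(defaults_for_age_range("13_15"))  # protective defaults for a teen
    print(defaults_for_age_range("adult"))  # standard defaults for an adult
```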
Legislation that would be harmful
Requiring parent permission would be harmful. This restrictive approach would limit access to safe places for many young people[16] and exclude teens who are in unsupportive family settings. It also puts the burden on parents to be gatekeepers for every decision about platform access, which has the potential to increase family conflict.
Shutting down particular social media platforms is also problematic. Singling out individual platforms does not address the systemic revenue-driven designs and business models[17] that exist throughout the industry.
Thirteen is already a common minimum age for social media platforms, and imposing age limits of 13 to 16 would also not be helpful. There is no clear evidence about what age range is best for all teens, and it is developmentally appropriate for 13-year-olds to want to connect with their peers online.
Adolescents themselves support needing to meet developmental milestones[18] to be allowed to use social media, and they acknowledge that individual teens may reach those milestones at different ages. In other words, some teens have no problems at age 13, while others will continue to have problems with social media at age 17. Age restrictions may serve to distract from making sure platforms are following guidelines and best practices for users of all ages.
Limits of legislation
Young people often navigate online interaction with little help from adults. There’s a need for additional approaches to engage, educate and involve parents – and other adults who work with and care for young people – in supporting young people as they enter the online world.
There are numerous other critical areas of work, including bullying, mental health and parent burnout, that need separate consideration and are likely to require distinct policy approaches. But policy alone is unlikely to solve all of these complex, intertwined issues that intersect in the digital world.
Moving forward
Legislation is a powerful approach to increase safety for young people online. It is important to recognize that teens themselves, as super-users in these spaces, have thoughtful ideas of their own about possible legislative and design elements to enhance their safety[19].
Families and adults who work with youth also need resources to better support adolescents. The Center of Excellence on Social Media and Youth Mental Health[20] seeks to provide that support through a Q&A portal, ongoing learning opportunities and other resources.
Finally, adults must also be accountable for their own social media and technology use. Many teens report that parents’ social media use distracts[21] from parent-child interaction and that adult social media use negatively affects them. To support young people, adults should model appropriate online behavior[22] – including being able to set their own phones down to be present for the critical, often tumultuous, yet amazing stage of their adolescents’ development.
References
- ^ new areas of risk and opportunity (www.nationalacademies.org)
- ^ Surgeon General’s Advisory on Social Media and Youth Mental Health (www.hhs.gov)
- ^ Kids Online Health and Safety Task Force (www.samhsa.gov)
- ^ pediatricians who (scholar.google.com)
- ^ study child online behavior (scholar.google.com)
- ^ Social Media and Youth Mental Health (www.aap.org)
- ^ adolescents’ drive for independence and social interactions (doi.org)
- ^ commodified by powerful technology companies (doi.org)
- ^ often receive (www.pewresearch.org)
- ^ teach each other ways to try to be safe (doi.org)
- ^ child sexual exploitation are not adequate (www.gao.gov)
- ^ wanting more control (www.goodformedia.org)
- ^ self-harm (doi.org)
- ^ adolescent development ideal of exploration (theconversation.com)
- ^ designating an intermediary (commission.europa.eu)
- ^ safe places for many young people (doi.org)
- ^ revenue-driven designs and business models (theconversation.com)
- ^ needing to meet developmental milestones (doi.org)
- ^ to enhance their safety (designitforus.org)
- ^ Center of Excellence on Social Media and Youth Mental Health (www.aap.org)
- ^ parents’ social media use distracts (doi.org)
- ^ should model appropriate online behavior (doi.org)