Teen Accounts: Will Meta’s New Update Sufficiently Protect Young Users?

By: Meredith Garrity

On September 17th, 2024, Instagram announced the implementation of “Teen Accounts,” an update that addresses growing concerns among parents about the unsafe content their children consume.  The development comes in response to the legal and public pressure Meta faces, culminating in litigation from over thirty U.S. states.  The lawsuit claims Meta’s algorithms are fueling a youth mental health crisis, a claim supported by the company’s own research showing adolescents experiencing increased symptoms of depression, anxiety, and body image problems linked to social media usage.  While Meta’s new update endeavors to place parents in the driver’s seat of supervision, it may prove insufficient in tackling teens’ unlimited access to social media platforms.

In September 2021, The Wall Street Journal published the Facebook Files, an investigative report detailing the harmful impacts Meta’s platforms have on young users.  The publication exposed how top executives at Facebook knew of the detrimental effects social media usage has on teens, yet failed to address or remedy these findings.  The source behind the leaked documents was identified as Frances Haugen, a former product manager at Facebook.  The internal reports Haugen released found that 13.5% of teen girls said Instagram worsens suicidal thoughts, 17% said Instagram contributes to their eating disorders, and 32% said that when they felt bad about their bodies, Instagram made them feel worse.

Haugen continued her quest to expose Facebook’s negligence, testifying before the Senate Commerce Subcommittee on Consumer Protection and imploring lawmakers to impose stricter regulations on social media platforms.  In her testimony, Haugen described Facebook’s algorithm, revealing that posts receiving high levels of interaction are prominently spread through users’ feeds.  This formula promotes sensational content containing hate and misinformation, which children on the platform are free to consume.  Facebook executives were allegedly aware that this harmful and inappropriate material was damaging young users’ mental health, yet failed to address it.

Haugen’s disclosures prompted more than thirty states to file a federal lawsuit against Meta, claiming the company violated various state consumer protection laws and the Children’s Online Privacy Protection Act (“COPPA”).  The complaint alleges that Meta used “dopamine-manipulating” algorithms to maximize young users’ time spent on its platforms, deliberately making Instagram and Facebook addictive to teens.  It further argues that Meta published false and misleading reports showing low instances of users experiencing negative impacts from social media, while Meta’s own research showed the contrary.  The states allege that top Meta executives knew of the harm to teens using social media, yet continued to neglect the issue to garner higher profits.

The legal action contributes to the ongoing discussion of social media’s impact on teens’ mental health.  Teens, and teen girls in particular, experience increased rates of depression, anxiety, body dissatisfaction, disordered eating behaviors, social comparison, and low self-esteem when spending excessive amounts of time on social media.  While Meta continues to counter these assertions, claiming its sites work to provide a safe and positive experience, the company announced the introduction of “Teen Accounts.”  The update automatically places teens under sixteen into private accounts with protections limiting the content they consume and the people able to contact them.  Parents now have the ability to limit interactions, set time-limit reminders, and enable sleep mode, allowing for heightened supervision of the content and communications teens experience on Instagram.

Although Meta’s update seems to trend in a positive direction for the protection of teens and the concerns of parents, there is growing doubt about whether the new features will be sufficient to address the detrimental content teens encounter.  Meta assures parents that an age verification requirement will enforce the new update; however, young users’ ability to evade this technology is obvious.  Instagram is still developing AI models that analyze users’ facial features to determine whether their stated age is accurate, but this feature has not yet been implemented, nor has it been shown to be accurate.  AI verification may seem like a powerful tool, but facial features vary widely from person to person, leaving a colossal margin for error and making teen circumvention of the technology appear inevitable.  Teens can easily lie about their age, yet this underdeveloped verification system serves as the backbone of the entire premise of Teen Accounts.

By placing the controls in the hands of parents, Meta absolves itself of responsibility, setting the entire burden of supervision within the family unit.  Parents undertaking this role of constant oversight could strain familial relationships, as parents will have the ultimate say on whom and what their teens see on social media.  Friction between what parents deem appropriate and how teens want to connect online could grow, all while Meta relinquishes responsibility for its own role in protecting teens.

Although Meta’s initiative is a step in the right direction, there are alternative options for combating the harmful effects of social media on teen users.  Meta has the resources and the capital to create a framework that provides oversight itself, rather than placing the responsibility of monitoring teens on parents and families.  Meta could create a task force to filter posts, using dedicated algorithms to ensure harmful or inappropriate material is not transmitted to young users.  A more proactive approach to protecting teens on its platforms would provide children with a uniform shield from unsafe content.

As Meta attempts to respond to the litigation and criticism surrounding its failure to disclose relevant data, the new update may not be sufficient to protect teens.  Underdeveloped AI technology and the potential for pushback within families could hinder the effectiveness of Meta’s proposal.  Although the impacts of Teen Accounts are yet to be known, the update will continue to encourage discussion about social media’s effects on teens and the pivotal role media platforms can play in protecting young users.

Student Bio: Meredith Garrity is a second-year law student at Suffolk University Law School. She is a staff member for the Journal of High Technology Law. Meredith received a Bachelor of Arts degree in Economics and Politics from Ithaca College in 2023.

Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.
