Are Social Media Algorithms Promoting Harmful Conduct?

By: Katie LePage

As you’re lying in bed scrolling through social media, do you ever wonder how the pair of shoes you googled last week appeared in your Instagram feed?  That is because platforms such as Instagram have embedded algorithms in their systems to identify users’ interests and provide them with similar suggestions that will hold their attention.  While algorithms might appear beneficial to society, they can actually be a dangerous tool for internet platforms to promote information that is harmful or misleading while avoiding liability under Section 230 of the Communications Decency Act.  In early October, the United States Supreme Court granted certiorari in a case that seeks to address whether the protections under Section 230 extend to an internet platform’s use of algorithms.  This will be the first time the Supreme Court interprets Section 230 since its enactment in 1996, and the decision may have significant impacts on how internet platforms recommend content.

In 1996, Congress amended the Communications Act of 1934 to include Section 230, which essentially provides immunity to online service providers for content published on their platforms, subject to certain exceptions such as federal criminal and intellectual property law.  Section 230 has been commonly characterized as one of the most important provisions for protecting free speech on the internet.  Its key provision is colloquially referred to as the “26 words that created the internet”: “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”  Section 230 has served as a vital shield for many internet platforms because they can allow individuals to post content such as hate speech, which is protected under the First Amendment, without being held legally liable.

Although Section 230 was passed to promote the development of the internet and to protect the right to free speech online, it has become a topic of extreme controversy in recent years.  The United States Supreme Court granted certiorari in Gonzalez v. Google LLC, a case arising from the death of Nohemi Gonzalez, a 23-year-old American exchange student who was murdered by ISIS terrorists in November of 2015.  Gonzalez was one of 130 people killed when ISIS carried out a series of bombings and shootings in Paris, France.  Gonzalez’s family sued Google under the Anti-Terrorism Act, alleging that YouTube, Google’s subsidiary, assisted ISIS in spreading its message by using algorithms to recommend ISIS-related videos to users, further facilitating users’ ability to locate other content and accounts related to ISIS.  While the plaintiffs concede that Section 230 protects Google for allowing ISIS to post its videos on YouTube, they ask whether the recommendations generated by the platform’s algorithms also qualify for immunity under Section 230.

While the United States Supreme Court is scheduled to hear oral arguments early next year, with a ruling expected before July 2023, it is abundantly clear that there is a desire to reform Section 230, regardless of the Court’s decision.  Algorithms are a powerful tool for internet platforms because users leave digital trails that allow companies to serve targeted advertising based on a user’s engagement with certain content.  While there are some benefits associated with platforms’ use of algorithms, they also have the potential to impose significant harm on society.  The purpose of an algorithm is to continue supplying content that captivates the user’s attention, regardless of whether that content is factual or truthful.  Many critics of Section 230 believe that it gives platforms far too much leeway in how they handle matters like COVID-19 misinformation, livestreamed crimes, and hate speech.

The dangers associated with algorithms were recently amplified on October 9, 2022, when Kanye West, who has 31.7 million followers on Twitter, tweeted antisemitic statements on the platform.  Even though the tweet has since been deleted, Twitter remains immune from liability under Section 230 for providing West the platform to broadcast such atrocious ideas.  The platform’s algorithms, however, will continue to promote similar content to users who expressed interest in the tweet, which could encourage recruitment by extremist groups.  After the tweet was posted, West gained 180,925 new followers, and various extremist groups hung banners and distributed flyers expressing support for his remarks.  Since West’s tweet, antisemitic rhetoric has been amplified through algorithms, which could potentially lead to antisemitic violence.

While it is unclear how the Supreme Court will rule in Gonzalez v. Google LLC, it is apparent that algorithms are a dangerous tool that platforms can use to furnish users with harmful and misleading information.  Although internet platforms are immune from liability for hosting information provided by third parties, protection under Section 230 should not extend to platforms’ use of algorithms to determine what users should view next.  Internet platforms that use algorithms to recommend harmful content no longer act merely as “publishers” under Section 230; by promoting content based on a user’s interests, they have transformed into “developers” of that content and therefore should not be afforded Section 230’s protections.  If the Supreme Court determines that algorithms are not protected under Section 230, internet platforms will be forced to make significant alterations in how they deliver content to users in order to avoid liability; however, such an adjustment comes seven years too late to have saved Nohemi Gonzalez’s life.
Student Bio: Katie LePage is a second-year full-time student at Suffolk University Law School.  She is a staff writer on the Journal of High Technology Law.  Katie graduated from Stonehill College, where she received a Bachelor of Arts Degree in Criminology, with a minor in Sociology.

Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.