Is it Internet Freedom or just Shielding Internet Companies from Liability?

By Shree Chudasama

How is it possible that if someone posts seemingly obscene content on Twitter or Facebook, you can sue the individual but not the platform? The answer lies in Section 230 of the Communications Decency Act. Prior to its passage in 1996, case law clearly established that publishers of content were expected to be aware of the material they published and were thus liable for any illegal content. The law distinguished between publishers and distributors of content. Distributors were more likely to be unaware of the content itself because they simply disseminated the material, and therefore were not held responsible for distributing illegal content. In the age of the internet, the line between publishers and distributors of materials began to blur, requiring a change in legislation to keep up with societal changes.

Consequently, Section 230 of the Communications Decency Act materialized. As it stands, the law purports to protect consumers and platforms as well as free expression generally. The legislation includes protection for “good Samaritan[s]” who block or screen offensive material. The law states in relevant part that, “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Beyond a few exceptions for criminal law and intellectual property matters, this has come to mean that the law essentially protects third-party platforms, like YouTube or Twitter, from being held responsible for the statements or content of their users. Because its broad language applies to nearly every online platform, Section 230 has had a major impact on the internet’s functionality and has shaped the internet as we know it today. The development and application of Section 230 have produced two distinct camps: those who favor the law’s widespread protections as supports for free expression, and those who oppose its broad language for giving exorbitant power to major internet corporations and their platforms.

Those who commend the passage of the law argue that Section 230 has allowed free speech and innovation to flourish online. In reviewing congressional intent behind the law, supporters point to legislators’ desire to give computer service providers leeway from liability. Without Section 230, these providers would likely need to implement restrictive moderation of the messages posted to ensure nothing illegal went up, but that limitation would also suppress legal speech and expression. The law’s protection also extends to bloggers, who are not held responsible for the comments posted in reaction to their content. Some academics even contend that the First Amendment requires Section 230.

The camp opposing Section 230 points to instances where the law’s wide protections allow harassment or sexual abuse to exist freely. Because of the immunity included in the act, many internet companies do not go after content depicting child sexual abuse: they face little to no liability, and thus have no incentive to stop the content from spreading or to find its source. Practically speaking, it would be nearly impossible for online computer service providers to stop all objectionable content from appearing on their platforms. Opponents of the law also note that Section 230 significantly limits successful defamation actions when content is not properly moderated.

The government also appears to be taking another analytical look at the impact of Section 230. In February 2020, the Department of Justice held a workshop on the scope of Section 230. The discussion sought to determine whether changes to the law should be made, based on whether it nurtures innovation or fosters unaccountability. The workshop was part of a broader antitrust review of big tech companies, but it focused on the posting of content relating to non-consensual pornography, harassment, and child sexual abuse.

As recently as March 2020, a group of senators introduced the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020 (EARN IT Act of 2020), which would create a governmental commission tasked with defining guidelines for handling online content related to child sexual abuse and exploitation. The senators, including Lindsey Graham (R-South Carolina), Richard Blumenthal (D-Connecticut), Josh Hawley (R-Missouri) and Dianne Feinstein (D-California), sought to introduce bipartisan legislation that would have tech companies “do better” in quelling online child sexual exploitation. They note that Section 230 gives these interactive computer services immunity from civil liability and some state criminal liability for third-party content on their platforms, so they do not aggressively quell child sexual exploitation.

The Justice Department’s endorsement of the EARN IT Act shows that the conversation around Section 230 is ongoing. As this probe into big technology companies’ power continues, the current administration will likely seek to tighten the limits on Section 230 to protect vulnerable populations. However, there has been little discussion of the consequences of these changes for how the internet can be used (such as the impact on encryption) or how the changes will be implemented (will prosecutors get more power?). Despite its lackluster name, the debate over Section 230 of the Communications Decency Act will likely have a big impact on how we use the internet today.

 

Student Bio: Shree is currently a second-year law student at Suffolk and a staff member of the Journal of High Technology Law. She graduated from Boston University with a Bachelor of Arts degree in international relations and a minor in business administration.

Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.
