Balancing Free Speech and Preventing Hate Speech in the Digital Age

By: Allison Nickerson


Social media allows everyday people to freely exercise their First Amendment rights by sharing their ideas and beliefs. While these platforms allow for free expression, healthy debate, and discourse, there are limits on the conduct that is allowed. These limits have existed since the inception of social media in the form of community guidelines and user agreements on all social media platforms. In the recent political and social climate, social media guidelines and censorship on these platforms have become a highly charged issue, as exemplified by the discourse surrounding the 2020 election. Social media companies are struggling to find the delicate balance between protecting First Amendment freedoms and preventing hateful and potentially violence-inducing speech. This balance was first addressed in Section 230 of the 1996 Communications Decency Act, which protects platforms from liability for harmful content posted on their sites by third parties. It also allows platforms to police their sites for harmful conduct as each company sees fit, but does not require them to remove anything and shields them from liability for that choice.

Elon Musk, an outspoken proponent of free speech who calls himself a “free speech absolutist”, purchased Twitter (now known as X) in October of 2022. Almost immediately after his acquisition, he began reforming the site with the stated goals of combating spambot accounts and promoting a balanced approach to free speech. He began his reforms by rolling back content moderation, the system the platform had put in place to target misinformation, hate speech, and offensive posts. Because content moderation censors online content to make it more palatable to users, his decision to reduce it drew heavy criticism from many sources, one of which was the Center for Countering Digital Hate (CCDH). CCDH is a non-profit organization whose mission is “to protect human rights and civil liberties online… CCDH holds [social media platforms] accountable and responsible for their business choices by highlighting their failures, educating the public, and advocating change from platforms and governments to protect our communities”. CCDH claimed that “[h]ate and disinformation have skyrocketed on Twitter since Elon Musk took over the platform” and supported this claim by publishing a report with studies showing that 86% of the hateful posts it reported were still up one week after they were reported. In response to the report, many of X’s advertisers began pulling their ads from the site to prevent them from running next to potentially hateful content. Musk fired back, arguing that CCDH seeks to suppress public dialogue and free expression in a way that favors its own ideology.


Furthermore, on July 31, 2023, Musk filed a lawsuit against CCDH in the Northern District of California, alleging breach of contract, violation of the Computer Fraud and Abuse Act, intentional interference with contractual relations, and inducement of breach of contract. These claims stem from Musk’s position that CCDH is an “activist [organization] masquerading as research agencies” that “cherry pick[ed]” posts to give the appearance that X is flooded with hate speech and harmful content. Musk asked the court for a trial by jury and an unspecified amount of damages sufficient to compensate X for the harm sustained as a result of Defendants’ actions. The case is still awaiting further proceedings, with a conference scheduled for early November.

CCDH is now fighting back against Musk. In a recent statement criticizing him, the organization explained that it believes he is silencing anyone who disagrees with him. It further argued that because he is the one who created the toxic environment on X, he should not be pursuing legal action against those who study and report on it. CCDH is now gathering support and funds to aid in its legal defense against the tech giant, and currently has 64 “experts”, ranging from non-profit advocacy groups to university professors, standing with it in the fight against Musk.


While the court has not yet ruled on this matter, it is clear that legislation needs to be implemented to strike a balance between First Amendment protections online and keeping the internet free from hate speech and incitements to violence. This issue is not going away; in fact, it is growing, as seen in Musk’s subsequent litigation against California over its content regulation bill (Assembly Bill 587). The bill would require platforms “to issue semiannual reports that describe their content moderation practices, and provide data on the numbers of objectionable posts and how they were addressed”. Musk opposes the bill because he believes its true intention is to pressure platforms into eliminating content the state finds objectionable. However, given the recent influx of legislation and litigation on the First Amendment and social media responsibility, steps are being taken by both governmental and private entities to preserve this constitutional right.


Student Bio: Allison Nickerson is a second-year law student at Suffolk University Law School. She is a Staff Writer for the Journal of High Technology Law. She graduated from North Carolina State University with a Bachelor of Arts in Political Science.


Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.
