By: Caroline Foster
In the lead-up to the November 3, 2020 election, social media platforms took action to curb the spread of false information that could taint the vote. The companies adopted these measures to avoid a repeat of the 2016 election, when Russian disinformation spread unchecked on Facebook, Twitter, and YouTube.
In particular, Twitter moved to address concerns that false information could fuel instability. The company took steps “to slow the way information flows on its network” by changing some of the platform’s features. For example, Twitter now imposes a brief pause before users can retweet a post from another account, and if users try to share content that Twitter has flagged as false, a notice warns them that they are about to share possibly false information. Twitter executives noted that “Twitter has a critical role to play in protecting the integrity of the election conversation, and we encourage candidates, campaigns, news outlets, and voters to use Twitter respectfully and to recognize our collective responsibility to the electorate to guarantee a safe, fair and legitimate democratic process this November.”
Similarly, Facebook said it would temporarily restrict new political ads in the week before Election Day and would take action if candidates or parties made premature claims of victory on the platform. Facebook also said it would prohibit all political and issue-based advertising after the polls close on Election Day for an undetermined length of time, and would place notifications at the top of the News Feed informing users that no winner had been declared until news outlets announced one.
Google likewise announced it would ban political ads after the election to allow time for the results to be tallied. During this period, advertisers will not be able to run ads that reference candidates, the election, or the election results. A spokesperson for Google noted, “Given the likelihood of delayed election results this year, when polls close on November 3, we will pause ads referencing the 2020 election, the candidates, or its outcome. This is a temporary measure, and we’ll notify advertisers when this policy is lifted.”
Senator Mark Warner wrote letters to these social media companies urging them to act against misinformation ahead of the election. In the letters, Warner wrote that he was writing “to again urge you to implement strong accountability and transparency standards in the context of our nation’s election” because Google and Facebook remain “vector[s] for disinformation, viral misinformation, and voter suppression efforts.” A bill Warner proposed in 2017 would have applied to digital political advertisements the same requirements that already govern TV and radio political ads, and would have required political advertisers to identify themselves. The bill’s sponsors sought to prevent foreign actors from influencing U.S. elections by ensuring that political ads sold online are covered by the same rules as ads sold on TV, radio, and satellite.
Do these election measures violate the First Amendment right to freedom of speech? The First Amendment provides that “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press, or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.” However, some believe the right the First Amendment protects should not be unlimited.
One law professor noted that instead of protecting the speech of “radicals, artists and activists, socialists and pacifists, the excluded and the dispossessed,” the First Amendment now serves “authoritarians, racists and misogynists, Nazis and Klansmen, pornographers and corporations buying elections.” This shift in whose interests the Amendment serves raises questions about its central purpose. Social media platforms, however, are private companies rather than government actors, and the First Amendment restrains only the government. These platforms are therefore free to restrict speech on their services, which allows them to censor what people post. In censoring, though, the companies can act subjectively and unevenly.
These platforms have employed measures like fact-checking and labeling since before the election, but some believe these measures achieve the opposite of their intent. One professor noted that attaching a weak label to a post by President Donald Trump has the effect of “giving it an attention bump by creating a second news cycle about Republican charges of bias in content moderation.” Further, Facebook and YouTube have in the past treated political ads as protected speech, allowing them to include false and misleading information.
While Republicans have claimed bias in what social media platforms choose to target as misinformation, it is imperative in this election that the platforms not target one party over the other, or there will likely be claims of an unfair election. Because the First Amendment does not apply to private companies, the platforms may choose how and what to censor, and they should be able to enforce their election policies without facing First Amendment infringement claims. However, if they do not apply the same rules to both the Democratic and Republican parties, these policies will do more harm than good.
Bio: Caroline Foster is a second-year law student at Suffolk University Law School. She currently serves as a staff member on the Journal of High Technology Law. Caroline received her Bachelor of Arts from Bucknell University, double majoring in English Literature and Philosophy.
Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.