Assessing the Legality of Facebook’s Advertising Analytics

By Jennifer Bourne

While speech is protected by the First Amendment, commercial speech receives less protection than other forms and is subject to far greater regulation. The Federal Trade Commission has been entrusted with the discretion to bring claims against companies that engage in “unfair or deceptive acts or practices in or affecting commerce.” 15 U.S.C. § 45(a)(1). Furthermore, in Central Hudson, the Supreme Court set out a four-part test for assessing government restrictions on commercial speech:

[First] . . . [the commercial speech] must concern lawful activity and not be misleading. Next, we ask whether the asserted governmental interest is substantial. If both inquiries yield positive answers, we must determine whether the regulation directly advances the governmental interest asserted, and whether it is not more extensive than is necessary to serve that interest.

Central Hudson Gas & Elec. Corp. v. Pub. Serv. Comm’n, 447 U.S. 557, 566 (1980). The Central Hudson test prevails to this day. This framework clearly recognizes a substantial governmental interest in protecting consumers against unlawful or unjust advertising practices and in preventing discrimination. While the Civil Rights Act of 1964 ended segregation in public spaces, targeted advertising allows marketers to segregate audiences to this day in the public forum of social media.

Facebook reigns as the second-largest online advertising distributor, lagging just behind Google. While online advertising can be seen as a modern advancement over the print, radio, and television advertising of previous generations, social media advertising can be especially invasive. Facebook’s marketing analytics allow companies to target specific groups of people and advertise on their feeds. Facebook uses users’ “likes,” published information, and information generated through its own algorithms to create an advertising profile for each user. Some of this information is self-reported, but much of it is not. As one example, Facebook tracks the advertisements you click on and the companies you interact with, not just the pages you “like.” Therefore, if you decide to tag your friends in a meme put out by Starbucks, Peet’s Coffee could have access to that information and see that you like coffee and memes. The company could then create content that you might like and advertise it to your specific feed.
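
To make the mechanics concrete, the minimal sketch below shows one plausible way an interest profile could be assembled from likes, clicks, and tags. The class names, category mapping, and scoring here are assumptions for illustration only, not Facebook’s actual data model or API:

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical mapping from pages and ads to inferred interest categories.
# Everything in this sketch is an assumption, not Facebook's schema.
PAGE_CATEGORIES = {
    "Starbucks": {"coffee"},
    "Peet's Coffee": {"coffee"},
    "DailyMemes": {"memes"},
}

@dataclass
class UserActivity:
    likes: list = field(default_factory=list)      # pages the user "liked"
    ad_clicks: list = field(default_factory=list)  # ads the user clicked on
    tags: list = field(default_factory=list)       # posts the user tagged friends in

def build_ad_profile(activity: UserActivity) -> Counter:
    """Score interest categories from every interaction, not just 'likes'."""
    interests = Counter()
    for page in activity.likes + activity.ad_clicks + activity.tags:
        for category in PAGE_CATEGORIES.get(page, set()):
            interests[category] += 1
    return interests

# Tagging friends in a Starbucks meme signals "coffee" and "memes";
# a competitor such as Peet's Coffee could then target that profile.
profile = build_ad_profile(UserActivity(tags=["Starbucks", "DailyMemes"]))
print(profile)  # Counter({'coffee': 1, 'memes': 1})
```

The point of the sketch is that every interaction feeds the profile, so targeting data accumulates well beyond what a user knowingly reports.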

In 2016, ProPublica created an advertisement in Facebook’s housing section and was able to bar African American, Asian American, and Latino users from viewing it. The ad was approved fifteen minutes after ProPublica submitted it. This type of limiting, specifically in the housing arena, clearly violates the Fair Housing Act of 1968, which states that it is illegal to “make, print, or publish, or cause to be made, printed, or published any notice, statement, or advertisement, with respect to the sale or rental of a dwelling that indicates any preference, limitation, or discrimination based on race, color, religion, sex, handicap, familial status, or national origin.” While Facebook claims that its policies prohibit advertisers from using the targeting options for “discrimination, harassment, disparagement or predatory advertising practices,” the company’s inaction belies that claim. The New York Times successfully created an algorithm to find discriminatory phrases in its ads and had real people review those advertisements before posting. While this process was adopted only after the Times was successfully sued for violating the Fair Housing Act, it demonstrates the ability companies have to find alternative methods when their practices prove discriminatory or illegal.
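
A screening pass of this kind is not technically difficult. The sketch below is a minimal, hypothetical version of a phrase filter that routes suspect housing ads to human reviewers; the flagged phrases and function names are illustrative assumptions, not the Times’s actual system:

```python
import re

# Illustrative, non-exhaustive phrases that might flag a housing ad for
# human review. These patterns are assumptions, not the Times's real list.
FLAGGED_PHRASES = [
    r"no\s+(?:children|kids)",
    r"adults\s+only",
    r"(?:whites?|christians?|couples)\s+only",
    r"no\s+section\s*8",
]

def needs_human_review(ad_text: str) -> bool:
    """Flag ad copy matching any suspect phrase for manual review."""
    return any(re.search(p, ad_text, re.IGNORECASE) for p in FLAGGED_PHRASES)

ads = [
    "Spacious two-bedroom near downtown; all are welcome.",
    "Cozy studio available, adults only, no children please.",
]
for ad in ads:
    print(needs_human_review(ad), "|", ad)
# False | Spacious two-bedroom near downtown; all are welcome.
# True  | Cozy studio available, adults only, no children please.
```

The automated pass narrows the volume of ads; the human review step then catches what keyword matching misses.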

While this process appears to be an advancement in marketing techniques because it allows companies to create a more specific target audience where their ads will be successful, it also enters dangerous territory: chiefly, discriminatory practices. These analytics allow companies to pick their target audience based on self-reported information such as hometown, current city, race, gender, and age. Additionally, users may “like” problematic pages that cannot be taken down because the statements made on those pages are seen as free speech. For example, ProPublica conducted a study in September 2017 in which it spent $30 to advertise to users who expressed interest in “Jew Haters” and “History of Why Jews Ruin The World.” These ads were approved by Facebook. When ProPublica informed Facebook of the categories used to target the advertising, Facebook removed those categories. Rob Leathern, a Product Management Director at Facebook, stated, “There are times where content is surfaced on our platform that violates our standards. In this case, we’ve removed the associated targeting fields in question. We know we have more work to do, so we’re also building new guardrails in our product and review processes to prevent other issues like this from happening in the future.”
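
How do categories like these surface at all? One plausible mechanism, sketched below purely as an assumption about how such systems can fail, is a pipeline that auto-generates targeting categories from free-text profile fields once enough users type the same entry, with no review step in between. This is an illustration, not a description of Facebook’s actual pipeline:

```python
from collections import Counter

# Hypothetical free-text profile entries (e.g., a "field of study" or
# "employer" box). Deriving advertiser-facing categories directly from
# raw user input is one plausible failure mode; this is an assumption
# for illustration, not Facebook's actual system.
profile_entries = [
    "Political Science", "Political Science", "Nursing",
    "Jew Haters", "Jew Haters",
]

MIN_AUDIENCE = 2  # expose a category once enough users share the entry

counts = Counter(profile_entries)
targeting_categories = sorted(c for c, n in counts.items() if n >= MIN_AUDIENCE)
print(targeting_categories)  # ['Jew Haters', 'Political Science']
```

Under a design like this, the “guardrails” Leathern describes would amount to inserting a review or deny-list step between user input and the advertiser-facing category list.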

While it may not be the role of tech companies to censor the internet or discourage political discourse, perhaps they could afford some extra development before releasing new technology that has discriminatory impacts. These companies have a social responsibility to protect the public in circumstances like these. Although targeted advertising can be seen as a good way to reach a specific audience, marketers need to start viewing this practice as what it really is: an effective way to exclude specific populations in a discriminatory manner.

Student Bio: Jennifer Bourne, Suffolk University Law School JD Candidate 2019. She holds a B.A. in Political Science from Boston University.

Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.
