Battle of the Bots: Using Artificial Intelligence to Combat Data Bias and Discrimination

By: Kisha Wilson

The last decade in technology has been marked by increased access to enormous amounts of data. AI and machine learning can unlock key insights, allowing companies to optimize interactions, illuminate patterns, and even predict decision making.

AI can be complicated, but, simplified, it amounts to programmers training algorithms to automate decision making using data. This machine learning allows the algorithms to predict outcomes and make decisions based on patterns. Businesses all over the world are using AI to optimize their products and services in every industry, from advertisements on social media to stock market predictions to employment screening. But without the proper controls in place, AI can also perpetuate harmful patterns like discrimination, prejudice, racism, and sexism, among other evils. Because AI is informed exclusively by data and the assumptions of its programmers, bias in either the data or those assumptions makes its way into the models.
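To make that mechanism concrete, consider a minimal, hypothetical sketch. Everything in it is fabricated for illustration: the "historical hiring" data, the group and skill variables, and the choice of Python with scikit-learn (the post names no particular tools). A model trained on biased past decisions simply learns to repeat them.

```python
# Hypothetical sketch: bias in training data flows into a model's
# predictions. The "historical hiring" data below is fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)    # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)      # skill is distributed identically in both groups
# Biased historical labels: at the same skill level, group 1 candidates
# were hired less often.
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Score equally skilled applicants from each group.
for g in (0, 1):
    X_test = np.column_stack([np.full(500, g), rng.normal(0, 1, 500)])
    rate = model.predict(X_test).mean()
    print(f"predicted hire rate for group {g}: {rate:.2f}")
# The model predicts markedly different hire rates for equally skilled
# groups, because the historical bias was encoded in the labels.
```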

As technology platforms reach wider and wider audiences, the discriminatory impact of faulty algorithms has the potential to accelerate and amplify the worst aspects of society. In fact, nearly 90% of Americans use AI products in their everyday lives, according to Gallup. As a result, the negative impacts of AI discrimination have become among the most pressing and high-profile threats, with some of the top technology companies in the world, like Facebook and Google, being called out for their contributions to discrimination in housing, credit, employment, and other opportunities.

In 2019, the Department of Housing and Urban Development (“HUD”) charged Facebook with housing discrimination after finding that the platform allowed advertisers to filter audiences based on race, gender, and other protected characteristics. While Facebook had begun to make changes, including removing discriminatory filters for housing, credit, and employment ads, HUD also found that the AI underlying Facebook’s ad platform was discriminating even without discriminatory filtering inputs from advertisers. Other major tech platforms investigated for similar discrimination have likewise made substantial changes to their ad platforms in an effort to reduce it. Similar suits alleging gender-based discrimination and other civil rights offenses have also been filed, sending tech giants scrambling to reform.

However, given the opacity of algorithms, the presence of proxy variables that mask discrimination, and protections like Section 230 of the Communications Decency Act of 1996 (“CDA”), which often shields platforms from liability for third-party content, it is difficult to determine where liability for discriminatory digital practices truly lies. This makes remedies even harder to come by. With the international community calling for action on data bias and legislative bodies slow to act, some are calling for programmers to fight fire with fire, using AI to detect and eliminate bias in datasets before it gets baked into the algorithms. Using AI to fight bad AI might be the fastest route to combating discrimination.
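The proxy-variable problem is easy to illustrate. Below is another hypothetical sketch (again Python with scikit-learn, with fabricated data throughout): even after the protected attribute is removed from the model's inputs, a correlated stand-in, here an invented zip-code indicator, lets the model reproduce the same disparity.

```python
# Hypothetical sketch of a "proxy variable": the protected attribute is
# dropped from the training features, yet a correlated feature carries
# the same signal, so the disparity survives.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)            # protected attribute
# zip_code matches group 90% of the time but is not group itself.
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)
skill = rng.normal(0, 1, n)
# Biased historical labels, as in the earlier sketch.
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

# Train WITHOUT the protected attribute.
model = LogisticRegression().fit(np.column_stack([zip_code, skill]), hired)

preds = model.predict(np.column_stack([zip_code, skill]))
for g in (0, 1):
    print(f"predicted hire rate, group {g}: {preds[group == g].mean():.2f}")
# Removing the protected attribute does not remove the disparity; the
# proxy masks the discrimination, which makes it harder to detect.
```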

The growing field of technology companies, data scientists, and others working to eliminate AI bias is known as the “AI Fairness” industry, and major players in tech are increasingly offering AI-based solutions for eliminating bias. AI-based solutions like IBM’s AI Fairness 360 (which offers a “… toolkit of metrics to check for unwanted bias in datasets and machine learning models, and state-of-the-art algorithms to mitigate such bias”); Microsoft’s Fairlearn (which uses an “interactive visualization dashboard and unfairness mitigation algorithms”); and Google’s ML-fairness-gym (which offers a “set of components for building simple simulations that explore potential long-run impacts of deploying machine learning-based decision systems…”), among many others, demonstrate how much more quickly tech players can spin out solutions than lawmakers or courts can. Technology companies are not the only ones getting involved in AI Fairness. Researchers at Penn State and Columbia University have also developed an AI tool for detecting discrimination.
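To give a sense of what these toolkits do, here is a hedged sketch using IBM's open-source aif360 package, which the post cites. The toy loan data, column names, and group definitions are hypothetical; only the aif360 classes and methods are the library's own.

```python
# Hedged sketch of the kind of check AI Fairness 360 performs: measure
# disparate impact in a dataset, then apply a mitigation algorithm.
# The loan data here is fabricated for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical data: "approved" is the label, "race" the protected attribute.
df = pd.DataFrame({
    "race":     [0, 0, 0, 1, 1, 1, 0, 1],
    "income":   [40, 55, 70, 45, 60, 75, 50, 65],
    "approved": [1, 1, 1, 0, 0, 1, 1, 0],
})
dataset = BinaryLabelDataset(
    df=df, label_names=["approved"], protected_attribute_names=["race"]
)
privileged = [{"race": 0}]
unprivileged = [{"race": 1}]

# Ratio of favorable-outcome rates between groups; values well below
# 0.8 are a common red flag (the "four-fifths rule").
metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("disparate impact:", metric.disparate_impact())

# One of the toolkit's pre-processing mitigations: reweigh examples so
# outcomes are balanced across groups before any model is trained.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
balanced = rw.fit_transform(dataset)
```

This is the "detect and eliminate bias in datasets before it gets baked into the algorithms" approach described above: the check and the fix both happen before model training.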

Innovations in AI Fairness and the global drive to identify and mitigate data discrimination have occurred while U.S. courts and legislators try to catch up to technological advancements still in motion. In search of legal solutions, lawmakers are now seriously considering whether laws like Section 230 have a negative impact on fairness and whether civil rights protections are adequate to address current and future technology. Meanwhile, harmful algorithms continue to perpetuate discriminatory decisions every day. As protests in the U.S. and around the world call for a rapid end to discrimination, AI may be the fastest path toward fairness.

Student Bio: Kisha Wilson is a third-year evening law student at Suffolk University Law School. She is a staffer on the Journal of High Technology Law and a Legal Innovation & Technology (LIT) Fellow in the LIT Lab at Suffolk University Law School. Kisha received a Bachelor of Arts in International Relations from Boston University.

Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.