A New Bill of Rights? How Discriminatory AI Has Prompted Potential Legislative Changes

By: Taylor Sullivan

Artificial Intelligence (“AI”) has become increasingly prevalent because of its ability to improve productivity and efficiency.  It now appears in our everyday lives through facial recognition in our phones, algorithms that assist the hiring process, tools that determine funding for housing, and more.  The technology can be adapted quickly, avoid human error, work at speed, and produce useful results.  With society’s growing reliance on AI, the laws protecting fundamental rights such as freedom, privacy, and fair trials must grow with it.

Two major issues have arisen as AI has taken the world by storm.  The first is privacy.  As AI is used publicly on roads and through street cameras, people have become increasingly worried about what kind of information is being retained.  New smart technology likewise has people wondering what personal information may be retained through AI listening to conversations or tracking smartphone usage.  Other countries have explored increased transparency as a solution to these privacy issues: entities would be required to disclose their uses of AI, including a complete description of what data is collected and how it is used and protected, and to give individuals the opportunity to understand how this new technology uses their personal information.

The second major rights-based issue related to the use of AI is discrimination.  Machines and algorithms learn from previous data, which trains them how to respond to a range of circumstances.  The problem is that, in certain cases, the training data is itself discriminatory and thus produces results with a discriminatory effect.  For example, a virtual assistant trained to understand certain voices may not understand Southern accents.  More consequentially, complex healthcare algorithms trained to detect disease in white patients may discount the severity of a disease in a Black patient.  Discriminatory data yields discriminatory learning, perpetuating the very prejudicial ideologies that recent societal changes have sought to leave behind.
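As a rough illustration of that mechanism, the toy Python sketch below (entirely hypothetical numbers, not any real clinical system) learns a simple “disease score” threshold almost entirely from one group of patients and then applies it to a second group whose scores run lower; the learned rule misses far more sick patients in the underrepresented group.

```python
# A toy, purely hypothetical sketch of how skewed training data produces skewed
# results: a "disease score" threshold is learned almost entirely from Group A,
# whose scores run higher than Group B's, so the rule under-detects disease in
# Group B.  All numbers are invented for illustration.
import random

random.seed(0)

def make_patients(group, n_sick, n_healthy, sick_mean, healthy_mean):
    """Generate (score, is_sick, group) records with group-specific score scales."""
    sick = [(random.gauss(sick_mean, 1.0), True, group) for _ in range(n_sick)]
    healthy = [(random.gauss(healthy_mean, 1.0), False, group) for _ in range(n_healthy)]
    return sick + healthy

# Training data: roughly 95% Group A; Group B's sick patients present with lower scores.
train = (make_patients("A", 475, 475, sick_mean=6.0, healthy_mean=2.0)
         + make_patients("B", 25, 25, sick_mean=4.0, healthy_mean=1.0))

# "Training": set the decision threshold halfway between the average sick and
# healthy scores observed in the (mostly Group A) training data.
sick_scores = [score for score, is_sick, _ in train if is_sick]
healthy_scores = [score for score, is_sick, _ in train if not is_sick]
threshold = (sum(sick_scores) / len(sick_scores)
             + sum(healthy_scores) / len(healthy_scores)) / 2

# Evaluate on balanced test sets: the same rule misses far more sick Group B patients.
for group, sick_mean, healthy_mean in [("A", 6.0, 2.0), ("B", 4.0, 1.0)]:
    test = make_patients(group, 500, 500, sick_mean, healthy_mean)
    missed = sum(1 for score, is_sick, _ in test if is_sick and score < threshold)
    print(f"Group {group}: {missed / 500:.0%} of sick patients missed")
```

The point is not the particular numbers but the structure: the model looks “accurate” on the population it was trained on while quietly failing the group that was underrepresented in the training data.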

In an attempt to ensure that AI-related security concerns are well defined and regulated, the Biden Administration has recently proposed the idea of an AI Bill of Rights.  The new AI Bill of Rights could include a number of enumerated rights and protections for personal information: the right to know when and how AI influences decisions that might affect civil rights; freedom from being subjected to AI that has not been shown to be reliable and unbiased; protection from discriminatory surveillance; and adequate recourse when a person is unjustly affected by an AI-influenced decision.

Regulation of this type will likely seek a balance between AI’s economic and societal potential and concerns about the reliability and privacy of those tools.  European countries have already begun to regulate risky applications of AI, including a current ban on government use of real-time facial scanning in public spaces until the technology can be shown to be accurate and non-discriminatory.  The United Nations human rights chief has discussed prohibiting applications such as social scoring systems that judge people based on their behavior and categorize them by ethnicity, gender, or other classes.

The White House Office of Science and Technology Policy, headed by Eric Lander and Alondra Nelson, has launched a fact-finding mission to consider the potential issues and concerns surrounding biometric AI.  In an article describing the concerns the proposal hopes to address, Lander and Nelson ask for opinions and comments from experts across many fields, including the federal government, academia, the private sector, and the public at large.  The goal is to enumerate a set of rights that addresses common concerns about the privacy and discriminatory implications of new AI.

Lander and Nelson report that some AI has repeatedly yielded discriminatory results.  One example is mortgage approval algorithms that use zip codes correlated with race and poverty to assign risk and financing categories.  People living in those areas thus tend to receive less favorable mortgage offers, creating a vicious cycle that concentrates low-income housing in the same areas and among the same classes of people.  Lander and Nelson note that the original Bill of Rights was built as a check on governmental power, guaranteeing freedoms such as expression and assembly.  The AI Bill of Rights is intended to provide a similar check on how the government and private entities use AI.
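To make that proxy effect concrete, the short hypothetical sketch below (invented zip codes, rates, and demographics; not any real lender’s model) prices loans using only a neighborhood’s historical default rate.  Race is never an input, yet because zip code correlates with race, the quoted rates fall unevenly across groups.

```python
# A purely hypothetical illustration of proxy discrimination in loan pricing.
# The pricing rule never sees race, but zip code effectively stands in for it.
NEIGHBORHOOD_HISTORY = {
    # zip code: (historical default rate, minority share of applicants) -- invented numbers
    "00001": (0.02, 0.10),
    "00002": (0.08, 0.75),
}

def quoted_rate(zip_code, base_rate=0.04):
    """Quote an interest rate using only the zip code's historical default rate."""
    default_rate, _ = NEIGHBORHOOD_HISTORY[zip_code]
    return base_rate + 2.0 * default_rate  # crude risk premium tied to neighborhood history

for zip_code, (_, minority_share) in NEIGHBORHOOD_HISTORY.items():
    print(f"Zip {zip_code} (minority share {minority_share:.0%}): "
          f"quoted rate {quoted_rate(zip_code):.1%}")
```

Because the less favorable rates then make defaults in that neighborhood more likely, the historical data the rule relies on is reinforced over time, which is the vicious cycle described above.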

Technology as powerful and useful as some new AI systems can have dangerously prejudicial outcomes if not properly regulated.  As society works to root out and outlaw systemic racism, it is imperative to ensure that the systems of the future follow suit.  A set of enumerated rights and regulations like the one proposed can create concrete guidelines that will help shape how AI is improved and used.  Major entities such as Microsoft have backed efforts to limit how AI is used in order to protect fundamental human rights.  As the world changes and the technological revolution continues, regulators around the world must take hold of this ever-evolving technology and mold it in the way most beneficial to our society and core values.

Student Bio: Taylor Sullivan is a third-year law student at Suffolk University Law School. She is a staffer on the Journal of High Technology Law.  Taylor received a Bachelor of Arts Degree in Criminal Justice and Emergency Preparedness, Homeland Security and Cybersecurity from the State University of New York – University at Albany.

Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.
