By Jordan Bigda

The CEO of IBM recently stated that Watson will be used by more than one billion consumers by the end of 2017.  Watson is a form of artificial intelligence (AI) created by IBM that can think cognitively, like a human.  IBM states that Watson works with “businesses, scientists, researchers and governments to outthink our biggest challenges.”  Watson has the capability to interact, reason, learn, and understand humans.  IBM is currently pitching Watson as a customizable “Virtual Assistant” for agencies of any kind.  Additionally, IBM is pitching Watson Explorer, a cognitive search and analysis platform that can be used to identify trends and improve decision making, customer service, and a company’s return on investment.  

For the past few years, Watson has been used in medicine and medical research.  By using Watson, hospitals and medical research laboratories are able to sort through more information faster, suggest ideas for new trials, and even diagnose patients.  Remarkably, Watson is able to read 25 million published medical papers in just a week’s time.  If law firms or courts utilized Watson in a similar manner, it could go through all federal and state laws and current cases, giving it extensive knowledge of the statutes and case law our court systems rely on.  Law firms would no longer have to hire associates to conduct legal research if Watson could cognitively internalize and analyze all of the law in the United States.  

Almost everyone uses some form of AI in daily life today (e.g., Google search, Siri), and, as IBM predicts, the majority of consumers will be using Watson by the end of 2017.  But Watson, and specifically AI’s capability to think cognitively, makes many people in the legal profession both wary and curious about the legal implications of this expanding area of technology.  These concerns lie largely within the realm of constitutional law.  

Typically, there are no constitutional problems with basic forms of AI, such as Google search and Siri, which are programmed by humans to search through information and relay it back to the user.  The main reason for the lack of constitutional concern is that these types of AI do not have cognitive abilities.  However, AI such as Watson now has the ability to think cognitively.  This brings us to the main constitutional concern: because these robots can think for themselves, does this mean they are entitled to rights?  Authors and journalists have been researching and writing about this question for the past six or so years.  The debate boils down to whether robots should be given personhood, because if so, these robots would be protected under the Constitution as persons.  Robots are capable of hearing and processing information, responding to language, and possibly even committing crimes.  Considering this, perhaps the Constitution should apply, because AI is able to do these things on its own without direct human intervention.  

The European Union considered this question and in January 2017 passed a report that would create a new legal status for robots: electronic personhood.  The report’s authors explained that electronic personhood is similar to corporate personhood, making companies responsible for the robots and AI they put into the marketplace.  Additionally, giving robots the status of electronic personhood brings them into the system of civil liability.  The report notes that the burden on a person bringing a case against a robot is not severe: the party bringing the claim must establish “a causal link between the harmful behavior of the robot and the damage suffered by the injured party to be able to claim compensation from a company.”  With this report, the EU became the first government to formally propose giving robots rights and a legal status.

But at the end of the day, Watson was created by humans and can only do as much as humans program it to do.  A rather humorous example of this came when one of Watson’s researchers decided that Watson should memorize the Urban Dictionary, a collection of slang, believing it would help Watson understand the way people in the real world communicate with one another.  The experiment backfired when Watson began to curse; the researchers deleted the Urban Dictionary from Watson’s memory and reprogrammed Watson so it could no longer swear.  

 

Jordan Bigda is a staff member on the Journal of High Technology Law and a current second year law student at Suffolk University Law School.  

Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.
