Can International Law Halt The AI Arms Race?

By Ryan Donahoe

The Creation of JAIC

On June 27, 2018, the Pentagon announced the creation of the Joint Artificial Intelligence Center (JAIC). The center's main purpose is to “pursue AI applications with boldness and alacrity while ensuring a strong commitment to military ethics and AI safety.” In pursuing this objective, the Pentagon intends to develop AI-based weapon systems to prepare for a new age of combat. The new center will be budgeted a total of $1.7 billion over the next five years. This staggering sum is evidence of how serious the United States is about incorporating AI into modern warfare.

Project Maven

One of these artificial intelligence projects has been named “Project Maven.” Project Maven is, at its core, an identification tool. Developed by the Algorithmic Warfare Cross-Functional Team, Project Maven's primary objective is to use artificial intelligence to look for objects of government interest by analyzing video and still photographs. The system is able to sift through hours of video footage and hundreds of photographs using biologically inspired neural networks and deep learning algorithms, identifying suspects or areas of interest and then tracking them over time. Although Project Maven does not yet include any weapon applications, its development has prompted widespread protests throughout Silicon Valley.
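To make that description concrete, the short sketch below shows the general shape of such an identification pipeline: a pretrained neural network scans an image and returns the objects it recognizes with high confidence. This is only an illustrative assumption built on the open-source PyTorch and torchvision libraries; the model, object categories, and confidence threshold are hypothetical stand-ins, not details of Project Maven's actual system.

# Hypothetical sketch of an object-identification pipeline of the kind described above.
# Assumes the open-source PyTorch/torchvision stack; not Project Maven's actual code.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a detector pretrained on a public dataset (COCO).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_objects(image_path, score_threshold=0.8):
    """Return bounding boxes and labels for objects the model detects with high confidence."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        prediction = model([image])[0]
    keep = prediction["scores"] >= score_threshold
    return prediction["boxes"][keep], prediction["labels"][keep]

# Example usage: flag objects of interest in a single still photograph.
# boxes, labels = detect_objects("frame_0001.jpg")

An analyst (or a downstream tracking system) would then review the flagged detections frame by frame, which is the human-in-the-loop step that distinguishes an identification tool from an autonomous weapon.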

AI Arms Race

Just as the United States is firmly committed to developing AI-based weapon systems, so are other global powers such as China and Russia. China in particular enjoys distinct advantages over the United States in developing AI. Xi Jinping's “Belt and Road” initiative has consolidated the nation's efforts to create an international network of trade and infrastructure. The Chinese government is steadfast in developing artificial intelligence applications for its own weapon systems, such as the incorporation of AI into missile systems. Russia has likewise opposed a ban on artificial intelligence-based weapon systems and, like China and the United States, has begun funding its own research into military applications of AI.

International Backlash

The international community has been very critical of these AI-based weapon systems, particularly because of the human rights questions their development raises. The Campaign to Stop Killer Robots has pushed the parties to the Convention on Certain Conventional Weapons (CCW) to discuss these issues. In late August, the sixth meeting on lethal autonomous weapons recommended continuing deliberations on the topic next year. Although the parties have not yet agreed on the necessary treaty provisions, some states are calling for a preemptive ban treaty. This is a promising start, but nations such as the United States and Russia oppose a preemptive ban.

Application Of The Martens Clause

The Martens Clause offers a unique way to hinder the development of these AI-based weapon systems. The clause is a principle of international humanitarian law that protects civilians and combatants where no applicable treaty provision exists. Because no treaty provision currently governs the development of AI weapon systems, the Martens Clause may provide a viable solution.

The development of AI-based weapon systems able to select and destroy targets without human control could violate both prongs of the Martens Clause. The first prong is the principles of humanity. One could argue that the development of these weapons directly contravenes the principles of humanity because a machine would be taking human lives. These machines would feel no emotions and would make their decisions based on pre-programmed algorithms alone. Even if one argued that these machines' primary purpose would be to protect human lives, it would still be hard to deny that they would be incapable of weighing the value of a human life.

The development of AI also runs counter to the second prong, the dictates of public conscience. The dictates of public conscience are moral guidelines reflecting a general sense of right and wrong. Currently, there is a strong sentiment in the international community that the development of these AI-based weapon systems is wrong. More than 26 states have supported a preemptive ban, and over 100 have called for a legally binding instrument to stop the development of these weapons. For now, the international community will have to continue raising awareness of this issue to build support for a ban on AI-based weapon systems. In the meantime, however, global superpowers such as the United States are moving full steam ahead in funding military applications of AI.

Student Bio: Ryan Donahoe is a 2L at Suffolk University Law School. He holds a Bachelor of Arts in Political Science from Syracuse University. He is originally from Franklin, MA.

Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.

 
