Unauthorized Use of Force and Robotic Security Guards

By Jordan Bigda

 

A Silicon Valley start-up, Knightscope, has been making robotic security guards for over a year now.  These robotic security guards are designed to roll around using sensors and software to assist human security guards on patrol.  The CEO of Knightscope, William Santana Li, describes the major benefits of these robotic security guards by stating that “[t]hese machines can do 10 times — and one day will do 100 times more — than a human could ever possibly do. They can read 300 license plates a minute and run it against a database.” See Ben Johnson, Are autonomous robots the future of mall security?, MARKETPLACE (Jan. 12, 2017), archived at https://perma.cc/Q6WS-MB2F.  The CEO also said that these robots could be used to prevent acts of terrorism by identifying bombs, and when speaking about the issue, referenced protecting public places such as schools, movie theaters, and malls.

Many police departments and military units around the country already have bomb squad robots, which help police forces detonate explosives and enter areas with possible bombs without jeopardizing the lives of police and military personnel.  In July 2016, the Dallas Police Department used a bomb squad robot to kill an individual who had killed five police officers.  Most jurisdictions allow law enforcement to use lethal force against an individual who is an imminent threat to the public.  However, for the use of lethal force to be justified, it must be reasonable in each circumstance in which it is used.  In this case, using a robot to kill an individual who was a threat to police officers and others was deemed justified.  However, a major distinction between the bomb squad robot used in Texas and these robotic security guards is that the bomb squad robot was controlled by a human, who was guiding it by remote.  Under existing law, the robot was no different from a handgun operated by a police officer.  By contrast, the robotic security guards move on their own through sensors and do not require human intervention in order to function.  These robots are being programmed to make judgments without any human input or intervention.

Because the Knightscope security robots move without the assistance of humans, these machines could and likely will pose issues surrounding the authorized use of force.  On its website, Knightscope notes that IBM Watson technology will soon be integrated into its security robots.  That technology would enhance these robots even further, allowing them to think and respond on their own through their programming.  Currently, the robotic guards are equipped only to observe, report, and intervene vocally.  However, it is foreseeable that as the product gains popularity, users will want these systems outfitted with the ability to detain suspects and physically intervene in violent situations.

Under existing law, the use of force must still be authorized by a human, and it is unlikely that, without legislative changes, these machines would be able to intervene in violent situations or detain suspects without direct human input.  If an artificially intelligent robotic security guard then used unreasonable force in trying to protect a mall, a movie theater, or a school, the question also arises as to who would be liable for the unauthorized use of force: the manufacturer or the client.  A recent incident illustrates the problem: one of the robots hit and knocked over a toddler at a mall in the San Francisco Bay Area.  The 300-pound robot was reportedly moving quickly toward the child when the child ran toward it, and the two collided.  The child suffered cuts and bruising but was ultimately okay.  Even though the robotic security guards have sensors to prevent these types of incidents, the sensors malfunctioned when the robot and the child collided, and the robot did not stop.  This malfunction resulted in injury, and the fault ultimately lay with Knightscope, the manufacturer, which responded by taking the robotic security guards out of the mall for a period of time.  It is harder to say who would be at fault if the robot used unnecessary force without malfunctioning, and the question could be further complicated by the need for third-party human input before force could be used.

 

Student Bio: Jordan Bigda is a staff member on the Journal of High Technology Law and a current second year law student at Suffolk University Law School.

Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.
