By George Barclay

Last week, this blog addressed some of the legal concerns that self-driving cars are likely to produce in the field of tort law.  Besides issues such as negligence and liability, the implementation of self-driving cars also presents some compelling technological questions.

What will self-driving cars consider to be “safe driving”?  Depending on where people live, how they were taught to drive, and the habits they have picked up from years behind the wheel, human beings have drastically different definitions of what constitutes safe driving.

How will these machines be set up to avoid accidents caused by glitches or shutdowns?  As obvious as it may sound, it bears mentioning that if we are capable of creating self-driving cars, those cars are just as capable of shutting down or malfunctioning.  Additionally, if improperly protected, these vehicles could be subject to hacking of their onboard systems, putting both passengers and public safety in jeopardy.  Furthermore, how user-friendly will these machines be?

Background

Designed so that human drivers never have to take control of the vehicle to operate it, self-driving vehicles use a combination of software and sensors to drive and navigate.

Self-driving cars and trucks are commonly classified into levels of automation.  At Level 1, the car can control a single system at a time, such as cruise control or emergency braking.  Level 2 cars offer simultaneous automated features such as steering and acceleration, but a human operator is still required to keep the car driving safely.  Level 3 cars can manage all of their safety-critical functions in certain conditions, but a human driver must take over upon an alert from the system.  Level 4 cars are fully autonomous in most situations, and Level 5 cars are capable of driving themselves in all situations.
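To make the taxonomy concrete, here is a minimal sketch of how these levels might be represented in software.  The enum and helper names are illustrative assumptions, not drawn from any real vehicle codebase:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Levels of driving automation, as described above (names are hypothetical)."""
    DRIVER_ASSISTANCE = 1       # one automated system at a time (e.g., cruise control)
    PARTIAL_AUTOMATION = 2      # simultaneous steering + acceleration; human monitors
    CONDITIONAL_AUTOMATION = 3  # car handles safety-critical tasks; human takes over on alert
    HIGH_AUTOMATION = 4         # fully autonomous in most situations
    FULL_AUTOMATION = 5         # self-driving in all situations

def human_attention_required(level: AutomationLevel) -> bool:
    """Below Level 4, a human must be ready to supervise or take over."""
    return level < AutomationLevel.HIGH_AUTOMATION

print(human_attention_required(AutomationLevel.CONDITIONAL_AUTOMATION))  # True
print(human_attention_required(AutomationLevel.FULL_AUTOMATION))         # False
```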

Companies such as Waymo, Uber, and Ford are just a few of the many corporations looking to develop self-driving vehicles.  Waymo and Uber have gone as far as testing these vehicles on public roads, with Waymo launching a self-driving transportation service within the last calendar year.  Ford’s CEO Jim Hackett noted in an April 2019 interview that while Ford will launch its self-driving car fleet in 2021, it will still take some time to make the vehicles fully autonomous.

One of the major arguments posed in favor of self-driving cars is public safety.  Worldwide, an estimated 1.35 million people die in road traffic crashes each year.  Research has also found that roughly two out of three people will be involved in a drunk-driving crash during their lifetime.  An estimated 94% of crashes are caused by human error or human choice.

Regarding the habits of self-driving vehicles, MIT researchers have put together a webpage called the Moral Machine.  This tool presents visitors with a series of prompts about driving safety scenarios and then compares participant responses.  The system is designed to gauge how human beings judge difficult driving situations so that those judgments can inform self-driving cars as they are being developed.
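The aggregation step the Moral Machine performs can be illustrated in a few lines.  This is a hypothetical sketch of tallying responses to one scenario, not the project’s actual methodology:

```python
from collections import Counter

# Hypothetical responses to one scenario: should the car prioritize
# its passenger or a pedestrian in an unavoidable-collision dilemma?
responses = ["pedestrian", "passenger", "pedestrian", "pedestrian", "passenger"]

tally = Counter(responses)
consensus, votes = tally.most_common(1)[0]
print(f"{consensus}: {votes}/{len(responses)} responses")  # pedestrian: 3/5
```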

What exactly is “safe driving”?

At first glance, the introduction of self-driving cars sounds like a great way of improving safety.  Automobile-related fatalities caused by drunk driving and negligence behind the wheel continue to be problems worldwide.  The problem is that if these cars are going to drive safely, whose definition of “safe driving” will be used?  For some people, safe driving means strictly obeying all traffic regulations.  Others see it as keeping with the flow of traffic and using turn signals and braking assertively to communicate with other drivers.

While MIT’s Moral Machine project is a good start, projects like this should become much more mainstream, particularly because different cities have drastically different styles of driving.  Boston, Massachusetts, for example, has a tremendous number of jaywalkers, and safe driving in Boston means keeping a lookout for pedestrians who dart into the middle of the street.  New York, on the other hand, does not have widespread jaywalking, and drivers are encouraged to keep with the flow of traffic and speed up as soon as a traffic light turns green.  Surveys regarding safe driving habits in the years to come will need to be tailored to different American cities.

Another effort at creating moral self-driving vehicles is detailed in a 2017 report by Germany’s Ethics Commission on Automated Driving.  Its essential findings included that (1) self-driving vehicles need to be implemented if the machines cause fewer accidents than human drivers, (2) human life takes priority over property damage in hazardous situations, (3) there must be clear rules as to who is responsible for the driving task when humans get into a self-driving vehicle, and (4) passengers must have control over whether and how their personal data is disseminated to other vehicles and used.

Accident Avoidance and Machine Malfunctions

Despite their groundbreaking potential, it remains imperative that self-driving vehicles be able to operate without glitches or shutdowns that lead to accidents.  A potential solution could be a default safety mode that the vehicle enters when its driving software breaks down, as sketched below.  But what happens if the safety mode fails to start or is shut off?
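One way to reason about such a default safety mode is as a watchdog-driven state machine: an independent monitor that forces the vehicle into a fail-safe state when the driving software stops responding.  This is a minimal sketch under that assumption; the timeout value, state names, and class structure are all hypothetical:

```python
import time
from enum import Enum

class DrivingState(Enum):
    NORMAL = "normal"
    SAFE_MODE = "safe_mode"  # e.g., decelerate, pull over, hazard lights on
    STOPPED = "stopped"

class SafetyWatchdog:
    """Forces a fail-safe state when the driving software stops responding."""

    HEARTBEAT_TIMEOUT = 0.5  # assumed seconds without a heartbeat before failover

    def __init__(self):
        self.state = DrivingState.NORMAL
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Called periodically by the main driving software while it is healthy."""
        self.last_heartbeat = time.monotonic()

    def check(self):
        """Called on an independent timer; enters safe mode if the software stalls."""
        stalled = time.monotonic() - self.last_heartbeat > self.HEARTBEAT_TIMEOUT
        if self.state is DrivingState.NORMAL and stalled:
            self.state = DrivingState.SAFE_MODE  # begin controlled deceleration
```

Of course, this only pushes the question back a level: the watchdog must run on hardware independent of the component it monitors, or the two share the same failure modes.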

Last year, one of Uber’s self-driving vehicles killed a pedestrian walking a bicycle in Arizona; the vehicle’s sensors detected her roughly six seconds before the crash, yet the car failed to brake.  While the pedestrian was reported to be under the influence of drugs, six seconds at a speed of 43 mph is plenty of time to avoid a collision.  Under a company policy meant to avoid “erratic driving,” Uber had disabled the vehicle’s built-in emergency braking system prior to the accident.
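A rough back-of-the-envelope calculation supports the point about six seconds.  The deceleration figure below is an assumed value for full braking on dry pavement, not a number from the crash investigation:

```python
# Distance covered in six seconds at 43 mph vs. an approximate braking distance.
MPH_TO_MPS = 0.44704

speed = 43 * MPH_TO_MPS               # ~19.2 m/s
distance_in_six_seconds = speed * 6   # ~115 m traveled between detection and impact

decel = 0.7 * 9.81                           # assumed ~0.7 g braking, ~6.9 m/s^2
braking_distance = speed ** 2 / (2 * decel)  # ~27 m needed to stop completely

print(round(distance_in_six_seconds), round(braking_distance))  # 115 27
```

Under these assumptions, the car could have stopped in roughly a quarter of the distance it actually covered after first detecting her.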

For emergency systems in self-driving vehicles to be truly effective, they cannot be optional, and they must be user-friendly in an emergency where the passenger needs to take control of the vehicle.  Quick fixes in the event of a mishap are also imperative.  Given that thousands of these vehicles could be on the road in the years to come, a massive shutdown of self-driving vehicles could be catastrophic for traffic and for the people who rely on these vehicles to get around.

Cybersecurity

One particular concern that comes with the mass production of self-driving vehicles is cybersecurity.  With the rise of hacking in the last decade, security systems need to be put in place so that these machines cannot be remotely compromised.  A large-scale hack of the operating systems of self-driving vehicles could allow the vehicles to be used for acts of crime or terrorism.  The companies creating self-driving vehicles need to be vigilant about updating the vehicles’ security features rather than relying on factory defaults in order to prevent this from happening.
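As one illustration of the kind of safeguard at issue, a vehicle could refuse any remote command that does not carry a valid message authentication code.  This is a minimal sketch using a per-vehicle shared key; the function names and the command strings are hypothetical, and a production system would involve far more (key provisioning, replay protection, signed firmware updates):

```python
import hashlib
import hmac

def sign_command(key: bytes, command: bytes) -> bytes:
    """Attach an authentication tag so the vehicle can verify the sender."""
    return hmac.new(key, command, hashlib.sha256).digest()

def verify_command(key: bytes, command: bytes, tag: bytes) -> bool:
    """Reject any command whose tag does not match; constant-time comparison."""
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# Usage: a command forged without the key fails verification.
key = b"per-vehicle secret, provisioned securely -- not a factory default"
cmd = b"unlock_doors"
tag = sign_command(key, cmd)

assert verify_command(key, cmd, tag)                 # legitimate command accepted
assert not verify_command(key, b"start_engine", tag) # forged command rejected
```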

Conclusion

Overall, the potential for self-driving vehicles is fascinating.  These vehicles could make our roads safer, expand the economy, and change traffic patterns for the better.  Yet, despite the multiple advantages of self-driving cars, important questions remain about the technology that will be used in these vehicles.  Without more oversight, self-driving vehicles could create continued safety concerns for other drivers, passengers, and the general public, particularly in heavily congested areas.  Companies will need to be proactive and patient in making sure that these vehicles can be operated safely on the roads, have functional emergency systems, and can be kept out of the hands of hackers and anyone who would use them for improper purposes.

George Barclay is a 2L staff member on the Journal of Health and Biomedical Law.  Prior to attending Suffolk University Law School, George earned a bachelor’s degree in history from Vanderbilt University.  After graduation, George is interested in practicing in the areas of criminal defense, civil litigation, and employment law.

Sources

https://www.ucsusa.org/clean-vehicles/how-self-driving-cars-work

https://waymo.com/mission/

https://www.npr.org/2018/10/26/660775910/should-self-driving-cars-have-ethics

https://www.bmvi.de/SharedDocs/EN/PressRelease/2017/084-ethic-commission-report-automated-driving.html

https://www.npr.org/2018/08/21/639646651/watch-self-driving-cars-need-to-learn-how-humans-drive

https://www.npr.org/sections/thetwo-way/2018/05/24/614200117/ntsb-uber-self-driving-car-had-disabled-emergency-brake-system-before-fatal-cras

https://www.engadget.com/2019/04/10/ford-ceo-says-the-company-overestimated-self-driving-cars/

https://apps.who.int/iris/bitstream/handle/10665/277370/WHO-NMH-NVI-18.20-eng.pdf?ua=1

http://moralmachine.mit.edu

Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHBL or Suffolk University Law School.