
Ethics of Autonomous Vehicles

There are over 6 million car accidents in the United States every year, and over 90 percent of them are caused by human error. Experts believe that widespread use of autonomous vehicles would greatly lower the number of crashes. Most major car manufacturers have already committed to researching and developing cars that drive themselves in some capacity. These vehicles require massive amounts of programming to learn how to react in different situations, and programming a vehicle that drives itself raises many ethical problems. What happens when a car senses a pedestrian in its path and cannot avoid them? How should a programmer handle such situations before they arise in the real world? There are also problems in assigning blame for a crash involving an autonomous vehicle. Would the owner, who is not driving, be responsible? Would the manufacturer? What about the team that programmed the vehicle? Are they responsible for accidents involving a car that is only following the instructions it was given? Since these vehicles will share the road with cars driven by humans, they will be put in “no win” situations and will have to decide quickly what action to take. Will they be able to make the correct decision based on all the facts on the road?

Since the invention of the car, people have tried to automate the experience of driving. In 1925, a remote-controlled car was demonstrated in New York City; it was operated from a second car following behind it. At the 1939 World’s Fair, General Motors sponsored an exhibit of radio-controlled vehicles guided by electromagnetic fields from circuits embedded in the road. Throughout the 1970s and 1980s, various groups experimented with similar vehicles steered by tracks embedded in the road. In the 1980s, however, Ernst Dickmanns and his team designed a vision-guided Mercedes-Benz robotic van that could travel at 39 mph on streets without traffic. In 1991, the U.S. Congress passed the ISTEA transportation authorization bill, which instructed the U.S. Department of Transportation to demonstrate an automated vehicle and highway system by 1997. After years of work, Demo ’97 took place on Interstate 15 in San Diego, California, where twenty automated vehicles were demonstrated to thousands of onlookers. Unfortunately, the program was cancelled in the late 1990s due to budget restrictions at the USDOT. The demonstration nevertheless succeeded in getting most automobile manufacturers thinking about their own autonomous vehicles.

In 2009, Google began developing its own line of self-driving cars in secret. General Motors, Ford, Mercedes-Benz, Volkswagen, Audi, Nissan, Toyota, BMW, and Volvo have also been testing driverless car systems. In 2015, Tesla Motors introduced its Autopilot technology through a software update. Autopilot is not fully autonomous, though: the driver must still pay attention and be prepared to take control at any time. Although most other manufacturers have working self-driving cars in testing, none has yet released a fully autonomous car to the public.

The advantages of self-driving cars are plentiful. Experts believe that the widespread use of autonomous vehicles could reduce traffic accidents by 90 percent and save thousands of lives. The number of accidents caused by driving under the influence of alcohol or drugs would be greatly reduced. Traffic throughout the country would also decrease, and car insurance would cost far less in a world full of autonomous cars.

Self-driving cars hold out the promise of being much safer than today’s manually driven cars. Yet they cannot be 100 percent safe, because they will travel at high speed among unpredictable pedestrians, bicyclists, and human drivers. There is therefore a need to think about how they should be programmed to react in scenarios in which an accident is highly likely or unavoidable. This raises important ethical questions. For instance, should autonomous vehicles be programmed to always minimize the number of deaths? Or should they be programmed to save their passengers at all costs?

Consider the following scenario. A self-driving car carrying five passengers approaches a conventional vehicle that suddenly departs from its lane and heads directly towards it. In a split second, the self-driving car senses the trajectory and likely weight of the oncoming car and calculates that a high-impact collision is inevitable and would kill all five passengers, unless the car swerves onto the pavement on its right-hand side. On that pavement an elderly pedestrian happens to be walking, and he will die if the car swerves right and hits him. This is the sort of situation in which the human passengers cannot take control quickly enough, so the car itself must respond. And for the five passengers to be saved, as they likely will be if the head-on collision is avoided, the car must make a maneuver that will most likely kill one person.

A scenario like this poses a stark ethical dilemma and raises questions about what the autonomous car’s priorities should be. Should it be programmed to protect its occupants over all others? Should it be programmed to calculate the greater good and injure the fewest people? What if the pedestrian is actually a group of deer on the side of the road? Could the car tell the difference and decide to hit the animals instead of the oncoming car? What if the pedestrian is actually a group of cardboard Halloween decorations that look like people? Would the car mistake them for living people, avoid hitting them, and kill the passengers instead? Unless a self-driving car is programmed to respond in determinate ways to morally loaded situations like these, there is an unacceptable omission in its readiness to deal with the realities and contingencies of actual traffic. For these reasons, automated vehicles need to be programmed for situations where a collision is unavoidable, and from this comes the need for ethical accident algorithms.
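
To make the idea of an ethical accident algorithm concrete, here is a minimal sketch in Python. Everything in it (the Maneuver record, the cost-function interface, all names and numbers) is a hypothetical illustration for this essay, not any manufacturer’s actual system: the car enumerates the maneuvers still physically available to it and picks whichever one its ethical policy scores lowest.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Maneuver:
        """One physically available option in an unavoidable-collision scenario."""
        name: str
        occupant_deaths: float   # expected fatalities inside the vehicle
        bystander_deaths: float  # expected fatalities outside the vehicle

    def choose_maneuver(options: List[Maneuver],
                        cost: Callable[[Maneuver], float]) -> Maneuver:
        """Return the option that the chosen ethical policy scores lowest."""
        return min(options, key=cost)

The point of the skeleton is that the hard part is not the selection logic, which is one line, but the cost function: whatever ethical theory the programmers adopt must ultimately be written down as arithmetic.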

Analyzing the scenario from a utilitarian point of view, every situation should be handled in the manner that minimizes loss of life. This means putting the good of society over the safety of the car’s own passengers, inverting traditional car design, which prioritizes the safety of the riders above all else. But would anybody buy a car that prioritizes another person over its owner? If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die, because ordinary cars are involved in so many more accidents. And therein lies the paradox: people are in favor of cars that sacrifice the occupant to save other lives, as long as they don’t have to drive one themselves.
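
Continuing the hypothetical sketch above, the utilitarian position and the occupant-first position can be written as two interchangeable cost functions, and an invented scenario where they disagree shows the dilemma in code form (the weights and casualty figures are illustrative assumptions, not data):

    def utilitarian_cost(m: Maneuver) -> float:
        # Minimize total expected deaths, counting everyone equally.
        return m.occupant_deaths + m.bystander_deaths

    def occupant_first_cost(m: Maneuver) -> float:
        # Weight occupant risk so heavily that it dominates any bystander risk.
        return 1_000_000 * m.occupant_deaths + m.bystander_deaths

    options = [
        Maneuver("stay in lane", occupant_deaths=0.0, bystander_deaths=2.0),
        Maneuver("swerve into barrier", occupant_deaths=1.0, bystander_deaths=0.0),
    ]
    print(choose_maneuver(options, utilitarian_cost).name)     # swerve into barrier
    print(choose_maneuver(options, occupant_first_cost).name)  # stay in lane

Two functions differing only in a weight send the car in opposite directions; the entire ethical controversy is compressed into a single numeric constant.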

An alternative analysis, from an individual-rights perspective, centers on protecting the car owner’s right to life. The main goal of today’s human-driven cars is to keep their passengers safe, and in that spirit an autonomous car could be programmed to avoid injuring its driver at all costs. The rights most pertinent to autonomous vehicles are the right to life and the right to freedom of choice. If a manufacturer offers different versions of its moral algorithm and a buyer knowingly chooses one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?

This brings up another open-ended question about autonomous vehicles. Who is to blame in the event of an accident that is the fault of the autonomous vehicle? There are two parties that could potentially be at fault in any given situation: the manufacturer of the autonomous vehicle, or the person who chose to purchase and use it.

The use of artificial intelligence and neural networks in autonomous vehicles may create surprising and unexpected complications in determining liability, and courts may not know exactly how to approach these problems. In the context of autonomous vehicles, where the vehicle, rather than the driver, is presumed to be in control, products liability theoretically fits. If courts apply products liability law to autonomous vehicles, manufacturers will potentially face enormous liability. Plaintiffs in tort actions generally sue parties with money, and in an accident involving an autonomous vehicle they will likely try to recover damages from big-name companies such as Google, Tesla, and other car manufacturers.

Another option for dealing with autonomous vehicle torts under existing law is to ask consumers to sign waivers accepting the risks of autonomous vehicles and taking personal responsibility for accidents. If consumers waive their right to sue the manufacturers, many initial products liability lawsuits could be avoided or mitigated.

Another problem facing makers of autonomous vehicles is programming the vehicle to adhere to the different driving laws of each state. What is legal in one state may not be legal in another. In Vermont, for example, passing another car over a double yellow line is legal; in most other states it is not. And what about laws that remain on the books but are no longer enforced? New Jersey has a law requiring a driver to honk the horn when passing a bicycle or another car; it was enacted in 1928 and is no longer enforced, but it has never been removed. Will the car need to be programmed to change how it operates and behaves in every state? The answers here are not completely clear, though one plausible approach is sketched below.
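
One conceivable (and purely illustrative) approach is a per-state rules table with conservative defaults. The Vermont and New Jersey entries below encode the two examples from this paragraph; the structure, key names, and defaults are assumptions made for the sketch, not any real vehicle’s configuration.

    # Hypothetical per-state traffic-rule lookup; names are invented.
    STATE_RULES = {
        "VT": {"may_pass_over_double_yellow": True},  # legal in Vermont
        "NJ": {"must_honk_when_passing": True},       # 1928 statute, unenforced
    }

    # Conservative defaults for any state without an explicit entry.
    DEFAULTS = {
        "may_pass_over_double_yellow": False,
        "must_honk_when_passing": False,
    }

    def rule(state: str, key: str) -> bool:
        """Look up a traffic rule for a state, falling back to the default."""
        return STATE_RULES.get(state, {}).get(key, DEFAULTS[key])

    assert rule("VT", "may_pass_over_double_yellow") is True
    assert rule("CA", "may_pass_over_double_yellow") is False

Even this toy table hints at the maintenance burden: fifty jurisdictions, each with rules that change over time, some of which, like the New Jersey honking statute, nobody actually expects the car to obey.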

In conclusion, vehicles are becoming ever more independent of human input. The creation of autonomous vehicles capable of operating without any decision-making from their passengers produces a multitude of ethical dilemmas. Questions such as who the car should protect, who is at fault in an accident, and whether human drivers should still be allowed on the road must all be answered, and the answers will vary greatly depending on the ethical theory applied to the situation.
