
Suppose that an autonomous car is faced with a terrible decision: it must crash into one of two objects. It could swerve to the left and hit a Volvo sport utility vehicle (SUV), or it could swerve to the right and hit a Mini Cooper. If you were programming the car to minimize (wrongful) harm to others — a sensible goal — which way would you instruct it to go in this scenario?
Let’s start with the only fact we have: the autonomous car can recognize the cars on either side. Not necessarily make and model, as in the question, but at least their size. A further, presumably safe assumption is that the car can detect the number and locations of the people inside itself, using current, readily available sensor technology. With these two pieces of information, the car must choose whether to hit the SUV or the Mini Cooper. I believe it should hit the Mini Cooper, because that choice is most likely to save the lives of its own passengers.
To start, we must establish how much better an autonomous car should be than a human. My mind jumps to a courtroom 30 years in the future, where an injured driver is suing both the owner of an autonomous vehicle and the company that made it. I will not get into the legal question of responsibility in this situation; for me, the main question is whether, given this impossible situation, a human driver could have handled it better or delivered a better outcome. I will later argue that this question is not actually relevant, but for now let’s assume the outcome is all that matters.
There is a range of outcomes in this situation and the line below depicts all possible outcomes from worst to best.
Worst____________________________________________________Best
The measurement here seems to be life: not only the number of lives (one might argue that five people in a coma or fully paralyzed is worse than one death), but the overall quality AND number of lives preserved. The “Best” outcomes preserve the highest quality and number of lives; the “Worst” outcomes preserve the least. One caveat: any factor contributing to the outcome that lies outside the information available in the moment cannot figure in the decision of which car to hit. For example, suppose the driver of the other car died after the crash from a pre-existing heart condition.
For everything under the control of the Google car, looking back at the incident afterward, did the autonomous car make the right decision, where the right decision is the one that maximizes the number and quality of lives saved? Another way of asking: could the autonomous car have saved more lives, using the available information?
What does available information mean? Let’s assume a human driver and the car have equal information. Each knows the size of the cars on either side; each knows that every car has at least one driver (we will assume neither neighboring car is autonomous); and each knows the number of passengers in their own car and where they are seated. For this argument, let’s also assume the driverless car can detect the identities of each person in its own car (family vs. friends), and therefore whether children or babies are present. Both the human and the car have common sense in their brain or code, and each makes a decision to maximize life (children are given slightly more priority, as is the case with most parents). Each hits the Mini Cooper, because that choice will likely save more lives in their own car: hitting the SUV would likely hurt them more than it hurts the SUV, so the Mini Cooper is the logical choice.
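To make this self-preserving decision rule concrete, here is a minimal, purely illustrative sketch. Every name and number below (the `Target` fields, the crude mass-ratio harm estimate) is my own assumption for illustration, not anything a real autonomous vehicle would actually use:

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    mass_kg: float   # rough estimated vehicle mass
    occupants: int   # estimated number of people inside

def expected_harm_to_own_passengers(own_mass_kg: float, target: Target) -> float:
    """Crude proxy: hitting a heavier vehicle transfers more energy back
    into our own car, so harm to our own passengers is modeled as the
    target's mass relative to ours. Illustrative only."""
    return target.mass_kg / own_mass_kg

def choose_target(own_mass_kg: float, targets: list[Target]) -> Target:
    # Pick the collision that minimizes expected harm to our own passengers.
    return min(targets, key=lambda t: expected_harm_to_own_passengers(own_mass_kg, t))

# Hypothetical numbers for the scenario in the text.
suv = Target("Volvo SUV", mass_kg=2100, occupants=1)
mini = Target("Mini Cooper", mass_kg=1200, occupants=1)

choice = choose_target(own_mass_kg=1500, targets=[suv, mini])
print(choice.name)  # the lighter car is the self-preserving choice
```

Under this (deliberately crude) model, the Mini Cooper is always chosen, because the only thing being weighed is the danger to the car’s own occupants.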
Now we start with the “what ifs.” What if a child is on the right side of the car? One parent might be completely selfless and do whatever it takes to save the child, even sacrificing their own life, and so hit the SUV. Another parent might hit the Mini Cooper with the same intention, reasoning that the child is in more danger from the larger SUV even from the other side of the car. Two equally valid arguments for two different decisions. From this we can conclude that “Which car is hit?” is not necessarily the important question. The more important question is “Why was the car hit?”
The answer would take the form: “This car was hit because of information X, Y, and Z, and that is why it was the best choice in the moment.” Because the justification, not the target, is what matters, one might be able to argue after the fact that a better outcome was achievable. For example, suppose a driverless car with one passenger hits the Mini Cooper, which has five people in it. Looking back, there was only one driver in the SUV, obviously sitting in the driver’s seat on the far side of the SUV. Had the Google car hit the SUV instead, only its own single passenger would likely have been seriously injured or killed; instead, five people are dead because it hit the Mini Cooper. Could we have expected a human driver to see how many people were in the Mini Cooper and hit the SUV for that reason? Possibly some selfless people who noticed the Mini Cooper’s occupancy would make this choice, but self-preservation might seem like the more logical choice for a human driver. While this scenario opens other issues, about valuing different human lives and about whether a driverless car should mimic a human in this situation, these choices start with access to information. Should a driverless car be able to gauge the number of people in another car, as some (not all) humans could?
Humans collect information with their senses, just as the Google car does with its sensors. Accessing this information is key. Our brain is very good at analyzing a scene and homing in on the important sights, smells, and sounds that might help us make the right decision. The brain of an autonomous vehicle could conceivably surpass this ability if enough specific situations are programmed into it. For example, cameras and image-recognition software, along with heat sensors, could count the number of people in another car, and possibly even determine whether any are children, something most humans would be unable to register in a split-second decision. So I could argue that driverless cars should be judged based on available information, because they should be able to gather and interpret more of it than a human could.
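If a car really could gather that richer information, the alternative rule the essay is circling, weighing every life equally rather than only the passengers’, could be sketched as follows. Again, this is purely illustrative: every field name and every casualty estimate here is a made-up assumption, chosen only to mirror the one-versus-five scenario in the text:

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    own_casualties: float    # expected deaths in our own car
    other_casualties: float  # expected deaths in the car we hit

def total_expected_casualties(opt: Option) -> float:
    # Weigh every life equally, regardless of whose car it is in.
    return opt.own_casualties + opt.other_casualties

# Hypothetical numbers: one passenger in the driverless car, one driver
# in the SUV, five people in the Mini Cooper.
hit_suv  = Option("hit SUV",  own_casualties=0.8, other_casualties=0.1)
hit_mini = Option("hit Mini", own_casualties=0.2, other_casualties=2.5)

best = min([hit_suv, hit_mini], key=total_expected_casualties)
print(best.name)  # under these made-up numbers, hitting the SUV saves more lives overall
```

The point of the sketch is that the same scenario produces the opposite decision once the objective function changes from “protect my passengers” to “minimize total casualties,” which is exactly the tension the rest of the essay addresses.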
Going back to the situation with five people in the Mini Cooper and one person each in the autonomous car and the Volvo: should the driverless car kill its own passenger in order to save the lives of the five passengers in the Mini Cooper, because it knows that would maximize the total quality and number of lives saved?
This is very similar to the “fat man on a bridge” scenario and is related to the principle of “wrongful” harm, which I will define as harm that was intended and that could have been prevented under a moral obligation. In the scenario, a train is about to run over five people, and the only way to stop it is to push one large man off a bridge so that his body derails the train. I believe one should not push the man over:
1. This scenario requires one to commit murder in order to save the five people on the tracks below.
2. There is a legal principle that one is responsible for a crime only if one could have prevented it.
3. Murder is preventable.
4. Laws are the outcome of hundreds of years of moral debate and of actual difficult situations requiring a morally correct outcome; they are therefore arguably the best, and only defensible, standard for how to handle a situation.
5. Under the law, the person who pushes the fat man will likely be punished for murder.
6. There should be no obligation, moral or legal, requiring someone to sacrifice themselves for others. A person may choose to sacrifice themselves, but cannot be expected to by others or be bound by a moral obligation, whether that means jumping off the bridge themselves or going to jail for pushing someone else off it.
Therefore, there is no moral obligation to save the five people on the train tracks. The five people would be harmed, but it is not “wrongful” harm, because there was no intention for them to die, nor was there a moral obligation to save them.
The driverless car should likewise have no obligation to save the five people in the Mini Cooper. It should act in the interests of its own passengers. Although it may have access to more information than a human driver would, it should not use that information if doing so compromises the lives of its passengers. Assuming the Google car made no error in arriving at the situation of choosing which car to hit, it should not and cannot be held responsible for choosing the outcome most likely to save its own passengers, whatever that outcome is.
Startup plug:
ClearMechanic has plans to build solutions in the many industries where paper checklists are still common practice. They make simple tools focused on ease of use and accountability. AND they have some awesome traction with their first product; I would not be surprised if they raise some serious funding soon and really step on the gas.
ClearMechanic is mobile inspection software for automotive service centers. We enable service centers to conduct digital inspections with any Apple or Android device, including real-time photos, videos and diagrams. Consumers are more likely to trust and approve recommendations when visual evidence is provided. We have documented a 20–40 point increase in success rates on service recommendations when ClearMechanic is used.