In: Psychology
Please write down your response after reading the paragraph. (At least 5 sentences long. 150-200 words)
The idea of a driverless car in our modern day and age at first sounds very pleasant, if one does not think too hard about it. As Iyad Rahwan mentions in his TED talk, about 1.6 million people die every year in traffic accidents, often because of human error. If human error were taken away, wouldn't this drastically decrease the number of accidents? However, there are many ethical dilemmas one must consider: should you allow your car to decide to kill you if doing so would save five pedestrians? There are countless factors that could change your answer immediately: what if you were driving with a family member or your significant other in the passenger seat? What if swerving meant running into an unsuspecting mother with a baby carriage? As Rahwan asks, would society even allow driverless cars with such risks, however small? With everyone having different opinions on the risks they're willing to take, I believe society as a whole will never come to a complete consensus on such a matter. I also think that, as a result, most people would avoid driverless cars altogether rather than face such a situation.
Personally, I lean toward Bentham's view of the situation: minimize total harm by sacrificing one life to save five others. However, Kant's position, that you should not actively intervene and should allow the car to take its course, is also a very valid argument. Who are we to choose who lives and who dies in a society that believes every human being is created equal? In addition, if one of my family members were with me in the car, and crashing would put their life at risk, would I be willing to make that sacrifice just to save five people I don't know? As such, this is a very difficult ethical dilemma to consider.
(Answer) A driverless car would work through computer programming. This means it would rely on road patterns, sensors, and monitored signals, and would sense moving people, vehicles, and other obstacles. The car would manoeuvre itself according to what the computer commands it to do.
The ethical conundrum here is: if the computer makes a mistake, who is there to punish? If a laptop computer causes us some kind of loss or problem at work or home, we might take certain steps. We could call the company and demand compensation under a warranty; we might sue, or switch to another company; or, if the purchase policy leaves us no recourse, we might do nothing at all.
However, no one loses a life when a laptop gives us problems. The graver the outcome, the greater the need for punitive measures. The challenge is that one cannot meaningfully punish an inanimate object, which does not feel the brunt of a punishment. In such situations, perhaps the only helpful thing to do is to come up with the best alternative.
The release of such vehicles on the road would worsen an ethical issue that has not been solved even without these cars in the picture. Whom would you save? How do you determine which life has more value? Can you empirically determine the value of a human life? Surely quantity and quality are two different elements that are not interchangeable. These questions would only become more pressing if driverless cars were released, and whom one would punish in the case of a mishap would be a whole different quandary. Perhaps, like other punishments that aim to solve a problem, the aim of this one would simply be to ensure that such a thing never happens again, as that is all that can be done.