Saturday, May 11, 2013

With Higher Knowledge Comes Higher Responsibility


The other day at work (and by now you know I work for a Japanese automotive electronics company), we talked about autonomous cars for consumers. Since everyone is either a technology freak or a car freak, the discussion was pretty intense.
I explained to everyone the ethical issue surrounding autonomous cars that may not be completely resolved, or resolvable, by technology.
The matter is this: an autonomous car will, with absolute certainty, be faced with a situation where it has to choose between two actions, each of which kills a different person. Suppose two people suddenly dash in front of the car, one to the left and one to the right, and suppose the car is moving too fast to stop. It can veer to avoid one person with certainty. But which will it choose?
Another scenario: the car can brake very hard and avoid killing a pedestrian, but in the process it will have killed the passenger, because the car is mechanically able to endure much higher deceleration than its occupants.
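To make that braking trade-off concrete, here is a rough back-of-the-envelope sketch (my own illustration; the speed and deceleration limits are assumptions, not numbers from the discussion). Under constant deceleration a, stopping distance from speed v is v²/(2a), so the harder the car is permitted to brake, the shorter it stops, and the more violent the ride is for whoever is inside.

```python
# Rough illustration: stopping distance vs. deceleration (constant-deceleration model).
# The speed and deceleration limits below are assumptions for the sake of the example.

def stopping_distance(speed_mps: float, decel_mps2: float) -> float:
    """Distance needed to stop from speed_mps at a constant deceleration decel_mps2."""
    return speed_mps ** 2 / (2.0 * decel_mps2)

G = 9.81        # m/s^2
speed = 20.0    # 20 m/s, about 72 km/h

for label, decel in [("gentle enough for occupants (~0.8 g, assumed)", 0.8 * G),
                     ("mechanical limit of the car (~3 g, assumed)", 3.0 * G)]:
    print(f"{label}: stops in {stopping_distance(speed, decel):.1f} m")
```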
There is the legal problem too: if I configure the car, or if some car company configures the car, to always protect its owner (rational enough), am I the owner, or the designer, or the manufacturer, then liable to be sued for killing people?
"But your honor, the car swerved!! I had nothing to do with it"
Okay, so the people who want autonomous cars (myself partially included) will say that with better equipment, high-speed video/audio recording, and black boxes, there might be far fewer arguments about who was responsible for accidents. But there are some things in our current law that are absolute. If a car hits a person inside the crosswalk, the car is always responsible. If a car is rear-ended, the car behind is responsible. What will happen to these absolute laws, which are in many circumstances unreasonable but serve to protect the safety of the population?
And finally, even if autonomous vehicles reduce deaths to 1% or less of today's vehicle-related death rate (and I believe they will), what about that 1% where two people dash in front of the car and the car has to choose? What then? Why is this so hard?
One of the big problems is that making an informed decision is hard. And yet, given today's technology (machine learning for object detection, vision algorithms, radar, laser range scanners, EEG/EKG, EMR technologies), the car can pretty reliably detect, with plenty of time left to choose whom to save, that two people are dashing in front of the car, one to the left and one to the right, along with each one's velocity, estimated trajectory, mass, the certainty of those estimates, and the margins of error (where else each person could plausibly be by the time of collision, and so on).
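Just to picture what "the car knows" might actually look like inside the software, here is a hypothetical sketch of the kind of record a perception stack could hand to the planner. Every name and number below is made up for illustration.

```python
# A hypothetical sketch of what a perception stack might hand the planner:
# each detected person with an estimated state and the uncertainty around it.
# All field names and values are invented for illustration.

from dataclasses import dataclass

@dataclass
class DetectedPerson:
    side: str                 # "left" or "right"
    position_m: tuple         # (x, y) relative to the car, metres
    velocity_mps: tuple       # estimated (vx, vy)
    mass_kg: float            # rough estimate from apparent size
    position_sigma_m: float   # margin of error: where else they might be at impact time
    confidence: float         # how sure the detector is that this is a person

left  = DetectedPerson("left",  (18.0, -1.5), (0.0,  2.0), 70.0, 0.6, 0.97)
right = DetectedPerson("right", (17.0,  1.8), (0.0, -1.8), 55.0, 1.1, 0.93)
```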
The reason humans get away with killing in this situation is that we do not have the speed and ability; it is beyond our control. That is, until we program a computer to do it, and then we are suddenly faced with choices we never had to make before: kill left, kill right, or maim both? Or risk killing both? Or kill myself to completely avoid injuring them?
Hmm, let's see. What would Confucius allow? What would Jesus insist on? Well, I don't want to be killed, so don't kill other people. I would want other people to save me, so I would want to brake and save both crazy people. Hmm, I guess it really depends on the person's desires. One could say a more moral person may not wish for another moral being to suffer for his sake, nor to trade another's life for his own. But by and large, most people would ask the car to save themselves no matter which position they are in.
The moral problem arises because we are not in any of those three positions. We are in the autonomous car designer's shoes. We are in Asimov's shoes. What should we write as the laws of autonomous vehicles? We know that at some point the car will know, almost certainly, that it must kill/damage/disrupt someone or something, and will know exactly which wire to send an electric signal down to choose which person. What should we tell the car to do?
Because soon the car will be looking at that scenario in slow motion... with 10 ms to decide and then 250 ms to turn the steering wheel left or right and apply the brakes.
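Whatever rule we do write down, in software it collapses into something uncomfortably mundane: enumerate the few maneuvers the car can still perform, score each by the harm it is expected to cause, and commit to one before the deadline expires. A minimal sketch of that loop, with an entirely invented harm model and made-up numbers, might look like this:

```python
import time

# Candidate maneuvers and an assumed expected-harm score for each
# (think: probability of serious injury, summed over everyone involved).
# The actions, numbers, and scoring model are pure invention for illustration.
CANDIDATES = {
    "swerve_left":  0.95,   # almost certainly hits the person on the left
    "swerve_right": 0.90,   # almost certainly hits the person on the right
    "brake_only":   1.60,   # likely injures both, less severely
    "brake_max":    0.40,   # spares both, at some assumed risk to the occupant
}

def decide(deadline_s: float = 0.010) -> str:
    """Pick the lowest-expected-harm maneuver within the decision deadline."""
    start = time.monotonic()
    best, best_score = None, float("inf")
    for action, score in CANDIDATES.items():
        if time.monotonic() - start > deadline_s:
            break                      # out of time: go with the best found so far
        if score < best_score:
            best, best_score = action, score
    return best

print(decide())   # the car then has roughly 250 ms to actuate steering and brakes
```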
So, as you can see, the mere knowledge of morality and the capability to choose encumber us with the responsibility of behaving morally. Because I know it's wrong, I must not do wrong. Another person may think that the root of this evil is the fact that I know of this moral dilemma at all, and that I have gained the speed to travel this fast and the power to determine people's fates.
I wonder if they are right that those things are the work of the devil and that the absolute best moral thing to do is just to stay away from them. I should consider this carefully. What if I find that it is wrong for me to live? Or wrong for me to blog about morality? What if it is found that the internet is not moral? Or, god forbid, that it is immoral to have stereo audio in cars? Because I already have the ability to terminate any one of them, at least for myself.
*shiver*
p.s.
I can accept an argument that placing oneself into a situation where there is no moral choice is itself immoral. The autonomous car makers will insist that the car drive carefully so that it is never faced with two people in said situation. But somehow science, technology, human inquiry may find a way to inform us that that is just delusional, that it is provably impossible to avoid crazy humans. ;) Back to square one, I suppose.
