“Teaching ethics to a human being is hard. I wonder if machines are easier.”
There is a railway track on which trains normally run, and a side track that is not meant to be used. A sign warns that walking on the main track is dangerous; walking on the side track is safe, because trains are not expected to pass on it. Ten young boys are playing on the main track while a lone boy plays on the side track. You notice a train approaching rapidly, and you are standing beside a lever that controls whether the train continues on the main track or switches to the side track. Assume the side track poses no risk to the train, and that you cannot shout to warn the kids (they are too far away) or do anything else. Given only the two following choices, which would you pick?
- Let the train continue on the main track, allowing the ten kids to die?
- Or send the train down the side track, letting only the one kid die?
Now consider a second dilemma: you are driving a car when a kid suddenly appears on the road ahead. Your options:
- Continue going straight and run over the kid
- Swerve left into a group of five boys
- Swerve right into a pole that may kill you
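A machine asked to resolve these dilemmas would most likely reduce them to arithmetic. Here is a minimal, hypothetical sketch of such a purely utilitarian chooser; the options and casualty counts simply mirror the two dilemmas above, and the numbers are assumptions, not anything a real system would know:

```python
# A naive utilitarian "machine": pick the option with the fewest expected deaths.
# Everything below is a hypothetical illustration, not a real decision system.

def choose(options):
    """Return the option with the minimum expected casualties."""
    return min(options, key=lambda o: o["casualties"])

trolley = [
    {"action": "stay on main track", "casualties": 10},
    {"action": "switch to side track", "casualties": 1},
]

driving = [
    {"action": "continue straight", "casualties": 1},   # the kid
    {"action": "swerve left", "casualties": 5},          # the group of five
    {"action": "swerve right", "casualties": 1},         # you, the driver
]

print(choose(trolley)["action"])  # switch to side track
print(choose(driving)["action"])  # continue straight (ties break by list order)
```

Notice what the body count hides: in the driving case, running over the kid and sacrificing yourself both score 1, so the machine breaks the tie arbitrarily by list order. Everything humans actually argue about in these dilemmas lives in that tie.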
Person of Interest is a wonderful TV series, a fast-paced, action-filled show with machine learning at its core. At one point, the machine (the central computer that uses ML is called ‘The Machine’ in the show) decides that a key politician has to be eliminated for the sake of peace. Its creator then ponders the decision: is a machine equipped to make decisions that humans find hard to make? If the death of a single person can bring peace, should that single person be killed? The answer may seem simple: yes, kill Hitler, save thousands of Jews. But can a machine reach that level of human reasoning? As the creator continues, “What if the machine indicates that a large number of people have to be killed in order to reduce world hunger?” Of course, if there are no people, then there can be no hungry people: simple logic for the machine.
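The world-hunger example is exactly the kind of failure a mis-specified objective produces. A hypothetical sketch makes the “simple logic” concrete: if the machine is told only to minimize the number of hungry people, a plan that eliminates everyone scores as a perfect solution. The plans and numbers here are invented purely for illustration:

```python
# Hypothetical objective: minimize the count of hungry people.
# The perverse plan wins because the objective never says people must survive.

def hungry_people(population, food):
    # Assume each unit of food feeds exactly one person.
    return max(0, population - food)

plans = {
    "grow more food":     {"population": 8000, "food": 7000},  # 1000 still hungry
    "do nothing":         {"population": 8000, "food": 5000},  # 3000 hungry
    "eliminate everyone": {"population": 0,    "food": 5000},  # 0 hungry
}

best = min(plans, key=lambda p: hungry_people(**plans[p]))
print(best)  # eliminate everyone
```

The machine is not being malicious; it is optimizing exactly the objective it was given. The hard part of teaching ethics to machines is that everything we left unsaid, such as “and keep the people alive,” is invisible to the optimizer.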