“Moral Machines” is an interesting piece by Gary Marcus in the New Yorker, exploring the growing confrontation between automated technology and moral decision-making. That confrontation is the site of an important dialogue about the complexity of morality and human behavior.
One front has been the driverless car – now functional as a machine, but dysfunctional within our current ethical-cultural-legal framework. The driverless car confronts ethical frameworks based on personal responsibility. Although it may be dramatically safer, statistically, for me to ride in an automated vehicle, I would be reluctant to hand my sense of personal control over to a machine. I would rather take a 1:100 risk of crashing due to my own error than a 1:1000 risk of crashing due to a program malfunction – because on a gut level I believe that personal control is equivalent to safety. And beyond my personal reluctance to give up perceived control, the driverless car challenges the existing social systems that reinforce personal responsibility: licensing, insurance, laws, justice.
The article reveals another side of the confrontation: the necessity for the machines themselves to have moral reasoning coded into their operations. Marcus gives an example:
“Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.”
It’s the Trolley Problem, no longer a thought experiment but a real-world decision, left to a computer.
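To make the dilemma concrete, here is a minimal sketch of what such a hard-coded call might look like. Everything in it – the function name, the inputs, the decision rule – is invented for illustration; no real autonomous-vehicle system is this simple.

```python
# A hypothetical, hard-coded "moral" rule, sketched for illustration only.
# The names and the decision logic here are invented, not taken from any
# real autonomous-vehicle system.

def choose_maneuver(occupants_at_risk_if_swerve: int,
                    others_at_risk_if_straight: int) -> str:
    """Pick an action by comparing how many people each path endangers."""
    if others_at_risk_if_straight > occupants_at_risk_if_swerve:
        return "swerve"    # risk the owner to spare the larger group
    return "straight"      # otherwise hold course

# Marcus's bridge scenario: one owner versus forty children.
print(choose_maneuver(occupants_at_risk_if_swerve=1,
                      others_at_risk_if_straight=40))  # -> "swerve"
```

Even this toy version bakes in a contested utilitarian premise – that the right choice is whatever minimizes the expected body count – while silently ignoring probability, uncertainty, and responsibility.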
Marcus is lucid about the fundamental issue with this problem: morality is complex, dynamic, relational and evolving, while the codes that run programs are brutal and rule-based.
“The thought that haunts me the most is that human ethics themselves are only a work-in-progress. We still confront situations for which we don’t have well-developed codes (e.g., in the case of assisted suicide) and need not look far into the past to find cases where our own codes were dubious, or worse (e.g., laws that permitted slavery and segregation).”
This perhaps has always been a problem that moral reasoning struggles with: how can complex truths be reflected in institutional frameworks? It is a great social taboo to let complex, dynamic, relational and evolving systems be the mess that they are. Institutionalization equals validity. But a judge needs a jury…
We might find that jury in the emerging field of machine learning. Generally, machine learning involves a computer program learning without being explicitly programmed. Given a large amount of training data, the system “learns” by recognizing and responding to patterns, rather than by executing a given set of rules. This enables the program to engage meaningfully with tasks far more complex than anything we have been able to capture in a set of codes. Voice and image recognition software are examples. It is very difficult to tell a computer how to determine the outline of an object or recognize the word “fork” in seventy different accents. But, given a ton of raw data (YouTube images and Google Voice audio, respectively), programs have learned to make these distinctions with more nuance than any programmer could write out as rules.
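As a toy illustration of learning from examples rather than rules, here is a from-scratch nearest-neighbor classifier. The data points are made up, and real voice and image recognition systems use vastly richer features and models, but the principle is the same: no programmer ever writes an explicit rule for any category.

```python
# A toy "learning from examples" sketch: nearest-neighbor classification.
# The training data below is invented for illustration; the point is that
# the program's "knowledge" lives in the data, not in hand-written rules.

import math

# Labeled examples: (feature1, feature2) -> label
training_data = [
    ((1.0, 1.2), "fork"),
    ((0.9, 1.0), "fork"),
    ((3.1, 0.2), "spoon"),
    ((2.8, 0.4), "spoon"),
]

def classify(point):
    """Label a new point by its closest training example."""
    def distance(example):
        coords, _label = example
        return math.dist(coords, point)
    _coords, label = min(training_data, key=distance)
    return label

# A never-seen observation is labeled by its resemblance to past data,
# not by any explicit definition of what a "fork" is.
print(classify((1.1, 0.9)))  # -> "fork"
```

Because the behavior comes from the data rather than the code, feeding the system more and better examples improves it without anyone rewriting a rule – which is exactly why the flood of YouTube images and Google Voice audio mattered.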