Moral Machines

“Moral Machines” is an interesting piece by Gary Marcus in The New Yorker, exploring the increasing confrontation between automated technology and moral decision-making.  That confrontation is the site of an important dialogue about the complexity of morality and human behavior.

One front has been the driverless car – now functional as a machine, but dysfunctional within our current ethical-cultural-legal framework.  The driverless car confronts ethical frameworks based on personal responsibility.  Although it may be dramatically safer statistically for me to ride in an automated vehicle, I would be reluctant to give up my sense of personal control to a machine.  I would rather take a 1:100 statistical risk of crashing due to my own error than a 1:1000 statistical risk of crashing due to a program malfunction – because on a gut level I believe that personal control is equivalent to safety.  Beyond my personal reluctance to give up perceived control, the driverless car challenges the existing social systems that reinforce personal responsibility – licensing, insurance, laws, justice.

The article reveals another side to the confrontation: the necessity for the machines themselves to have moral reasoning coded into their operations.  Marcus gives the example:

“Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.”

It’s the Trolley Problem, no longer a thought experiment but a real-world decision, left to a computer.

Marcus is lucid about the fundamental issue with this problem: morality is complex, dynamic, relational and evolving, while the codes that run programs are brutal and rule-based.

“The thought that haunts me the most is that human ethics themselves are only a work-in-progress. We still confront situations for which we don’t have well-developed codes (e.g., in the case of assisted suicide) and need not look far into the past to find cases where our own codes were dubious, or worse (e.g., laws that permitted slavery and segregation).”

This has perhaps always been a problem that moral reasoning struggles with.  How can complex truths be reflected in institutional frameworks?  It is a great social taboo to let complex, dynamic, relational and evolving systems be the mess that they are.  Institutionalization equals validity.  But a judge needs a jury…

We might find that jury in the emerging field of machine learning.  Generally, machine learning involves a computer program learning without being explicitly programmed for the task.  By providing the system with a lot of training data, it “learns” by recognizing and responding to patterns, rather than by executing a given set of rules.  This enables the program to engage meaningfully with tasks that contain far more complexity than we have been able to capture in a set of codes.  Voice and image recognition software are examples of this.  It is very difficult to tell a computer how to determine the outline of an object or recognize the word “fork” in 70 different accents.  But given a ton of raw data (via YouTube images and Google Voice audio respectively), programs have learned how to make these differentiations with more complexity than a programmer could write.
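The contrast between written rules and learned patterns can be made concrete with a small sketch.  The example below (Python with the scikit-learn library – my choice of tools, not anything from Marcus’s piece) never encodes what any handwritten digit looks like; the program is simply handed labeled examples and left to fit the patterns itself:

```python
# Toy illustration of learning from examples rather than explicit rules:
# nobody writes code describing what a "3" looks like; the model infers
# the patterns from labeled training data.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1,797 small grayscale images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)  # "learning": fitting patterns to the examples

# Typically well above 90% on images the model has never seen.
print("held-out accuracy:", model.score(X_test, y_test))
```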

Google fed a machine learning network 10 million unlabeled still images randomly extracted from YouTube videos, and without coded direction, the network developed the capacity to recognize a recurring pattern: the image of a cat.

No one told the program about cats.  The program learned to recognize cats by observing our own human behavior. It may not be a pattern of human behavior we would ideally want to represent us, but it is accurate.  Would machine learning networks fed data about human behavior over time learn to recognize patterns of behavior at moral choice-points?  And how would we feel about those results?
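Google’s actual network is far larger than anything that fits in a blog post, but the underlying idea – that structure can emerge from unlabeled data alone – can be sketched at toy scale.  In the hypothetical example below (again Python and scikit-learn, my choice), a clustering algorithm is given unlabeled images of handwritten digits and asked to find ten groups; only afterwards do we check its groupings against the true categories:

```python
# Toy sketch of unsupervised pattern discovery: the algorithm never sees a
# label, yet the groupings it finds line up with real categories in the data.
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.metrics import adjusted_rand_score

digits = load_digits()

# Ask for 10 clusters without ever revealing which image is which digit.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(digits.data)

# Only now do we peek at the true labels, purely to measure how closely the
# discovered clusters track the actual digit categories.
print("agreement with true labels:", adjusted_rand_score(digits.target, cluster_ids))
```

No one tells the algorithm what a digit is; the categories fall out of regularities in the data, much as the cat did.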

Moral decisions will always provoke discomfort.  A range of emotional and intellectual responses results in friction, debate, and outcry in any community.  There is no one answer to the Trolley Problem; leaving it to machines will not solve this part of the human condition.  But I wonder if the most moral machines would reflect the ambiguous cumulative wisdom of our cultural record.  Could a learning program receiving input from experimental and historical data on moral decision-making reflect the true complexity of morality more accurately than the rules and codes we write?  To me, the diversity and fallibility of human morality reflects the world I want to be a part of more than an infallible application of a philosopher-king’s moral ideals programmed into my environment.