Recently, the “trolley problem,” a decades-old thought experiment in moral philosophy, has been enjoying a second career of sorts, appearing in nightmare visions of a future in which cars make life-and-death decisions for us. Among many driverless car experts, however, talk of trolleys is très gauche. They call the trolley problem sensationalist and irrelevant. But this attitude is unfortunate. Thanks to the arrival of autonomous vehicles, the trolley problem will be answered—that much is unavoidable. More importantly, though, that answer will profoundly reshape the way law is administered in America.
Jay Donde (@Jay_Donde) is an attorney in the San Francisco office of Morrison & Foerster LLP, where he practices privacy and data security law.
To understand the trolley problem, first consider this scenario: You are standing on a bridge. Underneath you, a railroad track divides into a main route and an alternative. On the main route, 50 people are tied to the rails. A trolley rushes under the bridge on the main route, hurtling towards the captives. Fortunately, there’s a lever on the bridge that, when pulled, will divert the trolley onto the alternative route. Unfortunately, the alternative route is not clear of captives, either — but only one person is tied to it, rather than 50. Do you pull the lever?
Now, consider this: Once again, you are on a bridge underneath which passes a railroad track to which 50 people are tied. This time, however, there is only one track, and you notice the trolley sooner, about a mile behind you. Next to you on the bridge is a heavy man, leaning precariously over the railing. A terrible thought comes to mind — you could easily push the man onto the track. When the trolley struck him, it would stop, sparing the lives of the 50. Do you push the heavy man?
There are no right or wrong answers here. The problem, rather, arises from the fact that many people say they would, regretfully, pull the lever, but then recoil at pushing the heavy man. It is difficult to identify a moral principle that would both be acceptable to most people (i.e., they would not object to its consistent application in all scenarios) and simultaneously justify each of the foregoing postures. For example, if your guiding principle is to minimize casualties, there should be no meaningful distinction between killing a person by pushing as opposed to pulling. Still, we all can acknowledge that something about pushing the heavy man feels different — it feels closer to murder. The trolley problem highlights a troubling possibility: that our moral intuitions are governed as much by arbitrary, or superficial, factors as they are by well-considered principles.
It’s not hard to see the parallels with a road scenario in which, say, swerving to avoid an auto collision means crashing into a crowd of café patrons. But many experts have been quick to point out that trolley problem scenarios are already rare occurrences; driverless tech, which promises to be safer than any human operator, will only make them more so. To impede the spread of a technology that could yield innumerable benefits over such fears, they claim, would be hysterical.
This response has merit, but its narrow focus on automotive safety ignores a broader issue: The trolley problem pervades American jurisprudence. Among other things, the most common role of a civil jury is to determine whether a defendant’s actions, in shifting costs and risks from one party to another, were reasonable. To make these determinations, juries are usually instructed to apply a rough-and-ready mathematical formula known as the Hand rule (styled after an early 20th century judge named — yes, really! — Learned Hand), which balances the interests of those affected by the defendant’s actions — the quintessence of trolley problem scenarios. Under the rule, a defendant is negligent if the burden of taking adequate precautions was less than the probability of the harm multiplied by the gravity of the resulting loss. Of course, you don’t need to be a lawyer to read that and spot the inconsistency: If we can find the right answer by simply plugging values into an equation, then why do we even need juries?
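For the curious, here is a minimal sketch of the Hand formula in code. The function name and the dollar figures are invented for this illustration; they are not drawn from any actual case.

```python
# A minimal sketch of the Hand formula, B < P * L: skipping a
# precaution is negligent if its burden (B) was less than the
# probability of harm (P) times the gravity of the loss (L).

def is_negligent(burden, probability, loss):
    """Return True if forgoing the precaution fails the B < P * L test."""
    return burden < probability * loss

# A $1,000 precaution against a 1-in-100 chance of a $500,000 loss:
# the expected loss is $5,000, so forgoing the precaution is negligent.
print(is_negligent(burden=1_000, probability=0.01, loss=500_000))  # True
```

On paper, the test is mechanical; in practice, juries supply the values.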
The truth is that the Hand rule, while providing an acceptable answer most of the time, will sometimes provide one that is atrocious. It can fail because, again, our moral intuitions are not always satisfied by adherence to loftier principles. When that happens, the legal system relies upon juries to find some wiggle room in the equation and return a verdict that is compatible with the public’s common sense of justice.
With driverless cars, however, there can be no wiggle room. Like any computer, a driverless car will not do anything unless instructed. A programmer can’t simply give it instructions for most scenarios and avoid thinking about edge cases. At the same time, a driverless car must make decisions within a fraction of a second. There is no opportunity to present the circumstances to an external, human “road jury” for review. Thus, the instructions must stand on their own merits. Someone will have to propose (or, at least, accept when an algorithm proposes) an explicit, unambiguous rule for when to pull the lever, push the heavy man, or swerve into the café. Society must take the trolley problem seriously not because driverless cars shouldn’t be fielded until it is solved, but because driverless cars will compel a solution — and the values embodied by that solution will likely be adopted across a number of important civic arenas, including the law.
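To see what standing on its own merits means in practice, here is a hypothetical sketch of the kind of explicit, unambiguous rule such a system would need. Everything in it, from the names to the casualty figures, is invented for illustration; it is not drawn from any real vehicle’s software.

```python
# Hypothetical sketch of an explicit driving rule; all names and
# casualty figures below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_casualties: float

def choose_maneuver(options):
    # No jury, no wiggle room: the machine must rank every option by
    # some explicit criterion. Here, the criterion is simply
    # minimizing expected casualties.
    return min(options, key=lambda m: m.expected_casualties)

# The café scenario from above, reduced to numbers.
options = [
    Maneuver("brake in lane", expected_casualties=4),
    Maneuver("swerve into cafe patio", expected_casualties=1),
]
print(choose_maneuver(options).name)  # swerve into cafe patio
```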
But who, then, can be relied upon to solve the trolley problem prudently? At least one academic has argued that lawyers will come to the rescue. The problem with this notion is that lawyers have spent the better part of our profession’s history running away from providing definitive answers to difficult problems. Attorneys have even invented an adage to make this abdication seem responsible: “Hard cases make bad law,” it’s said. In truth, the lawyers-will-save-us argument has the direction of causality backwards: The impact of the law will not be felt upon the trolley problem; rather, the impact of the trolley problem, and its solution, will be felt upon the law — for example, in how juries are instructed to determine whether someone behaved reasonably.
It’s tempting to hope that someone else will come along and solve the trolley problem. After all, finding a solution requires confronting some uncomfortable truths about one’s moral sensibilities. Imagine, for instance, that driverless cars are governed by a simple rule: minimize casualties. Occasionally, this rule may lead to objectionable results — e.g., mowing down a mother and her two children on the sidewalk rather than hitting four adults who have illegally run into the street. So, the rule might be augmented with a proviso: Minimize casualties, unless one party put itself in danger.
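Here is one hypothetical way that augmented rule might be encoded; the scenario representation, field names, and groups are all invented for this sketch.

```python
# Hypothetical encoding of "minimize casualties, unless one party put
# itself in danger"; the groups and flags are invented for illustration.

def choose_group_to_spare(groups):
    # Prefer sparing parties who did not put themselves in danger;
    # among those, spare the larger group (minimizing casualties).
    blameless = [g for g in groups if not g["at_fault"]]
    candidates = blameless or groups
    return max(candidates, key=lambda g: g["size"])

groups = [
    {"who": "four adults who ran into the street", "size": 4, "at_fault": True},
    {"who": "a mother and two children on the sidewalk", "size": 3, "at_fault": False},
]
# The proviso spares the smaller, blameless group despite the raw count.
print(choose_group_to_spare(groups)["who"])
```

Note what happens when every party is at fault: the sketch quietly falls back to raw headcounts, which is precisely where the hard cases begin.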
But what if the choice is between four jaywalking men and three jaywalking mothers — or four jaywalking mothers, or simply four jaywalking women? Reasonable people could disagree over what outcome would be acceptable in these scenarios and struggle to rationalize their positions. Hard cases don’t make bad law; they make bad jurists, ones who are afraid to admit that their reasoning is often driven by selfishness, sentimentality, or social pressures.
Despite these challenges, society should resist outsourcing its moral codification. If the trolley problem’s answer is to reflect, or at least be informed by, the rich diversity of experiences, attitudes, and opinions in America’s communities, it is crucial that everyone participate in the process. The experts may say that the trolley problem is nothing to fret over, but they’re forgetting that cars are, first and foremost, vehicles. Even if you’re wholly unconcerned with how driverless cars perform while you are in them, you should be very concerned with where you end up when you step out of them. In this case, they’re driving society towards legal and political reforms that should not be ignored.