This post first appeared March 27, 2015. It’s no less relevant on March 25, 2020.
Stop me if you’ve heard this one. A trolley carrying five school children is headed for a cliff. You happen to be standing at the switch, and you could save their lives by diverting the trolley to another track. But there he is – an innocent fat man, picking daisies on that second track, oblivious to the rolling thunder (potentially) hurtling his way. Divert the trolley, and you save the kids and kill a person. Do nothing, and you have killed no one, but five children are dead. Which is the greater moral good?
This kind of thought experiment is known as a sacrificial dilemma, and it’s useful for teaching college freshmen about moral philosophy. What you maybe shouldn’t do is ask a guy on the street to answer these questions in an fMRI machine, and then use his answers to draw grand conclusions about the neurophysiological correlates of moral reasoning. But that’s exactly what some neuroscientists are doing. The trouble is, their growing body of research is built on a philosophical house of cards: sacrificial dilemmas are turning out to be exactly the opposite of what we thought they were. Guy Kahane wants to divert this trolley before it drives off a cliff.
Kahane, deputy director of Oxford’s Uehiro Centre for Practical Ethics, has never been a big fan of the sacrificial dilemma. The main problem, he says, is that it has been misapplied to situations it was never intended for.
Philosophically, the sacrificial dilemma has a narrow purpose. Your choice supposedly illuminates which of two camps of moral reasoning you fall into: choose to hypothetically end a life to save a few more, and yours is described as a utilitarian judgment. Reject it, and you are said to be making a non-utilitarian (“deontological”) judgment. Roughly translated, the utilitarian is concerned primarily with outcomes, while the deontologist has a morally absolute point of view that holds you couldn’t tell a lie even to save someone’s life, because it’s wrong to tell a lie (Kant being the most extreme member of this camp). But when I say “roughly translated” I really mean roughly: to be truly understood on their own merits, these terms need the full battery of philosophical context.
So what do philosophers mean by utilitarianism? It means that you’re the kind of person who, as John Stuart Mill prescribed, is generally, genuinely concerned with the greater good. That you are capable of “transcend[ing] [y]our narrow, natural sympathies … to promote the greater good of humanity as a whole, or even the good of all sentient beings”. It’s an algorithmic way of seeing the world in which all your actions must aggressively maximise the good.
That’s a demanding moral framework! Let’s separate it from what I’ll refer to from now on as “scare-quotes utilitarianism”, embodied by the reaction of “what, just kill the fat guy.”
Over time, the distinction between the two has been flattened by the inappropriate overuse of these sacrificial dilemmas. As a result, we’ve begun to assume that “what, just kill the fat guy” is shorthand for an entire moral compass tuned to the kind of “God’s-eye” concern for the greater good that defines utilitarian ethics. And so, in addition to being “complex, far-fetched, and convoluted”, Kahane says, sacrificial dilemmas have been misunderstood and misapplied.
But while it’s absurd to use them to pigeonhole average Joe non-philosophers into the utilitarian/deontological boxes, could sacrificial dilemmas still offer some small glimmer of insight into the average person’s real-world moral reasoning? For example, might a person who answers “just kill the fat guy” — while not also believing, in true utilitarian fashion, that she should maximise welfare by donating 90 percent of her money to distant strangers — be more likely to agree that she should give to charity?
To find out, Kahane teamed up with some other Oxford philosophers, including Brian Earp, Jim Everett and Julian Savulescu. They designed a series of experiments to examine exactly how well the answer you give to a sacrificial dilemma maps onto your larger moral framework.
The results, published in January in the journal Cognition, were not encouraging.
Not only does a “utilitarian” response (“just kill the fat guy”) not actually reflect a utilitarian outlook, it may actually be driven by broad antisocial tendencies, such as lowered empathy and a reduced aversion to causing someone harm. Which makes a kind of sense: in the real world, given the choice between two kinds of harm, most people wouldn’t be able to cost it up quite so coldly. In fact, respondents who “killed the fat guy” also scored high on a question that asked them how likely they would be to actually, in real life, kill the fat guy (and to act in other sacrificial dilemmas, like the one where you must smother a crying baby to save a group of hiding refugees). They similarly aced the psychopath test (featuring statements like “success is based on survival of the fittest; I am not concerned about the losers”) and flunked the empathy test (“When I see someone being taken advantage of, I feel kind of protective towards them”). As you might expect, “scare-quotes utilitarians” scored low on concern for the greater good. Taken together, the results of their experiments led the authors to conclude that answering in the “utilitarian” fashion may reflect the inner workings of a broadly amoral mind.
So why should anyone care about this apart from some philosophers breathing pretty thin air? Because in recent years, psychologists and neuroscientists have seized on these sacrificial dilemmas as a tool of choice for understanding how the brain deals with moral choices:
In the current literature, when subjects judge that it is acceptable to sacrifice one person to save a greater number, this is classified as a utilitarian judgment, and thought to reflect a utilitarian cost–benefit analysis, which is argued by some to be uniquely based in deliberative processes (Cushman, Young, & Greene, 2010), and even in a distinctive neural subsystem (Greene, 2008; Greene et al., 2004).
Neuroscientists have spent over a decade amassing research based on these types of thought experiments. In 2001, in one of the first neuroimaging studies of moral cognition, subjects in an fMRI scanner were posed these dilemmas, and their answers were used to draw deep conclusions about the neurophysiological correlates of morality. It got a lot of attention. That attention led other researchers to follow suit with more studies. “Once a body of research grows around a paradigm, it is easier to build on it than to come up with a new experimental design,” Kahane wrote. “Soon everyone is using this paradigm, just because everyone else is.”
It wouldn’t be the first time neuroscience has stepped in it. In recent years, serious flaws in big studies’ design and reporting — the “dark patch” of the psychopath’s brain, the dead salmon whose brain appeared in an fMRI scanner to spark to life when shown pictures of people — have led to questions about whether the discipline has much to add to science at all. This latest reliance on philosophical thought experiments is just asking for more trouble.
But this isn’t to say that any investigation of what happens in the brains of people considering moral dilemmas is useless. Kahane just thinks we should jettison the sacrificial dilemma — which is better at identifying B-school psychopaths than at identifying moral reasoning — and find something genuinely distinctive of utilitarian thinking. In a paper published in Social Neuroscience, he recommends we instead find new ways to suss out a person’s ability to “transcend our narrow focus on ourselves and those near and dear to us, and to extend our circle of concern to everyone, however geographically, temporally or even biologically distant.” Then neuroscientists can have at it with the brain mapping.
That might even give us a real moral compass for the 21st century. If you’re reading LWON, chances are you’re in the privileged position of being fairly insulated from climate change; your country will probably put adaptive measures in place to make sure you and your children never suffer. But climate change will devastate other parts of the world and kill people you’ve never met, and their children. The US and its allies are drone-bombing places you’ve never heard of. Your smartphone was made, and will be disassembled, in places you’ve never visited and don’t care about, and those places are polluted as hell. How do we start to make a better world? Not by defining morality in the scientific literature as a calculating numbers game.
A team of neuroscientists is on a trolley headed for a cliff. A lone philosopher stands at the switch…
Correction, 8 April 2015: post was updated to reflect the contribution of Jim Everett
What exactly is the fat guy doing on the track anyway? Flick the switch, and yell at him to move. If he has headphones in and doesn’t care that he’s standing on a track, oblivious to the world around him, which at any point could have a trolley hurtling towards him whether you flick the switch or not, well, that’s his problem, isn’t it? Maybe he wants to die; maybe that’s why he’s on the track in the first place. Most people don’t just stand around on a track without being aware of what might be coming down it. The driver should be ringing the bell or sounding the horn; chances are he’ll hear it coming and jump out of the way at the last minute.
For the baby, stick your thumb in its mouth, or put a dummy in, or give it some milk. It’ll try suckling and hopefully shut up for a while.
That’s the main problem with these questions: artificially reducing your choices to one of two bad things is in no way representative of real life.
Neurotic scientists will always find neurotic answers and solutions to problems created by neurotic people.
Intellectualists will always be at the losing end.
Proper answers, responses and actions can only be offered by people who have full neurological connection between the three brain levels of consciousness: visceral—emotional—intellectual.
They are the ones who were born and brought up with love and had their natural human developmental needs fulfilled.
Such humans exist.
Years ago, I was chided by a philosophy/ethics professor for choosing wrong in the following dilemma (per his text, based on an actual event and a little different from Singer’s animal-rights question): a lifeboat with a dead sailor, two live sailors and a dog. Who would the survivors eat first? My answer: because they have no idea how long they will be stranded at sea, they eat the dead body, saving the dog (as fresh meat) for later. No disrespect was intended.
But the professor was incensed – don’t eat people, no matter what.
I asked about the Donner party. He actually sat at his desk and didn’t say anything else.
What would be the *correct* answer? Is it even relevant (how many of us will find ourselves in such dire circumstances), when we can barely decide whether or not to use turn signals as common courtesy? “Kill the fat man” is so chillingly blithe. Could we lovingly admire the “fat” man’s appreciation of flora while regrettably running him over to save the lives of the children, and all of their potential? Could we use the dead sailor to troll for sharks?