Emotion and Reason in Moral Judgment
Rationalist philosophers such as Plato and Kant conceived of mature moral judgment as a rational enterprise, as a matter of appreciating abstract reasons that in themselves provide direction and motivation. In contrast to these philosophers, "sentimentalist" philosophers such as David Hume and Adam Smith argued that emotions are the primary basis for moral judgment. I believe that emotion and reason both play critical roles in moral judgment and that their respective influences have been widely misunderstood.
More specifically, I have proposed a "dual-process" theory of moral judgment according to which characteristically deontological moral judgments (judgments associated with concerns for "rights" and "duties") are driven by automatic emotional responses, while characteristically utilitarian or consequentialist moral judgments (judgments aimed at promoting the "greater good") are driven by more controlled cognitive processes. If I'm right, the tension between deontological and consequentialist moral philosophies reflects an underlying tension between dissociable systems in the brain. Many of my experiments employ moral dilemmas, adapted from the philosophical literature, that are designed to exploit this tension and reveal its psychological and neural underpinnings.
Moral Dilemmas and the "Trolley Problem"
My main line of experimental research began as an attempt to understand the "Trolley Problem," which was originally posed by the philosophers Philippa Foot and Judith Jarvis Thomson.
First, we have the switch dilemma: A runaway trolley is hurtling down the tracks toward five people who will be killed if it proceeds on its present course. You can save these five people by diverting the trolley onto a different set of tracks, one that has only one person on it, but if you do this, that person will be killed. Is it morally permissible to turn the trolley and thus prevent five deaths at the cost of one? Most people say "Yes."
Then we have the footbridge dilemma: Once again, the trolley is headed for five people. You are standing next to a large man on a footbridge spanning the tracks. The only way to save the five people is to push this man off the footbridge and into the path of the trolley. Is that morally permissible? Most people say "No."
These two cases create a puzzle for moral philosophers: What makes it okay to sacrifice one person to save five others in the switch case but not in the footbridge case? There is also a psychological puzzle here: How does everyone know (or "know") that it's okay to turn the trolley but not okay to push the man off the footbridge?
According to my dual-process theory of moral judgment, our differing responses to these two dilemmas reflect the operations of at least two distinct psychological/neural systems. On the one hand, there is a system that tends to think about both of these problems in utilitarian terms: Better to save as many lives as possible. The operations of this system are more controlled, perhaps more reasoned, and tend to be relatively unemotional. This system appears to depend on the dorsolateral prefrontal cortex, a part of the brain associated with "cognitive control" and reasoning.
On the other hand, there is a different neural system that responds very differently to these two dilemmas. This system typically produces a relatively strong, negative emotional response to the action in the footbridge dilemma, but not to the action in the switch dilemma. When this more emotional system is engaged, its responses tend to dominate people's judgments, explaining why people tend to make utilitarian judgments in response to the switch dilemma but not in response to the footbridge dilemma.
If you make the utilitarian judgment sufficiently attractive, you can elicit a prolonged competition between these two systems. Consider the crying baby dilemma: It's war time, and you are hiding in a basement with several other people. The enemy soldiers are outside. Your baby starts to cry loudly, and if nothing is done the soldiers will find you and kill you, your baby, and everyone else in the basement. The only way to prevent this from happening is to cover your baby's mouth, but if you do this the baby will smother to death. Is it morally permissible to do this?
According to the dual-process theory, this dilemma is difficult because it, like the footbridge dilemma, elicits a strong negative emotional response ("Don't kill the baby!"), while at the same time eliciting a comparably compelling utilitarian response from the other system ("But if you don't kill the baby, everyone dies."). Difficult dilemmas like this one tend to elicit increased activity in the anterior cingulate cortex, a brain region associated with "response conflict." And when people make utilitarian judgments in response to these difficult dilemmas, they exhibit increased activity in anterior regions of the dorsolateral prefrontal cortex.
The Moral Significance of Moral Psychology
My interest in understanding how the moral mind/brain works is in part driven by good-old-fashioned curiosity, but I also believe that we can use a scientific understanding of morality to make the world better. As everyone knows, we humans are beset by a number of serious social problems: war, terrorism, the destruction of the environment, etc. Many people think that the cure for these ills is a heaping helping of common sense morality: "If only people everywhere would do what they know, deep down, is right, we'd all get along."
I believe that the opposite is true, that the aforementioned problems are a product of well-intentioned people abiding by their respective common senses and that the only long-run solution to these problems is for people to develop a healthy distrust of moral common sense. This is largely because our social instincts were not designed for the modern world. Nor, for that matter, were they designed to promote peace and happiness in the world for which they were designed, the world of our hunter-gatherer ancestors.
My goal as a scientist, then, is to reveal our moral thinking for what it is: a complex hodgepodge of emotional responses and rational (re)constructions, shaped by both genetic and cultural influences, that do some things well and other things extremely poorly. My hope is that by understanding how we think, we can teach ourselves to think better, i.e., in ways that better serve the needs of humanity as a whole.
For a short introduction to some of these ideas, you can download this article. For a longer, and more philosophically contentious, presentation, I recommend this article. Then there is my philosophy dissertation, which you are welcome to slog through. Finally, I am writing a book about these issues, which I expect to be published in 2012. I also have a related paper about the problem of free will and legal responsibility (co-authored with my former advisor, Jonathan Cohen).