Would you sacrifice the lives of the few to save the many? Illuminating research into people’s moral decision-making process and how it can help inform organisational strategy
Moral conflict is a part of daily life and usually results from people holding different standards of what is right and wrong. We often give fundamentally different answers to serious moral questions, and sometimes the line between right and wrong can become blurred.
To understand the direction people take when experiencing moral dilemmas, our research, the product of a collaboration between Harvard University, Harvard Business School, UCL School of Management and the Hebrew University of Jerusalem, presented scenarios to a total of 1,163 people. In these scenarios, a runaway train is heading towards a group of five people. Their lives can be spared, but only at the expense of another individual’s life. In one option, an individual can flip a switch to divert the train onto a different track, ultimately killing one person. In another, people are asked whether they would physically push, or drop, a person off a footbridge so that the train would hit the fallen person and slow down. In both cases, the action would allow the group of five to escape.
The three principles: utilitarian, action and intention
When faced with this kind of moral dilemma, people usually respond in one of three ways. The first is the utilitarian principle, under which people evaluate actions based on their net benefit and seek to maximise the good. On this principle, the ‘divert’ and ‘drop’ dilemmas are equivalent, as in both cases one should sacrifice an individual to save the many.
The second response is the action principle, which prohibits any harmful action, regardless of its consequences, positing that actively causing harm is worse than passively allowing it. On this principle, one should not act in either dilemma, as both ‘diverting’ and ‘dropping’ cause harm to innocent individuals, whereas inaction merely allows harm to occur.
The final response is the intention principle, which can be summarised as doing no intentional harm. It prohibits actions that are intended to bring about harm, especially actions that use a person as a means to an end. However, it does allow actions that involve unintended but foreseen incidental harm, as long as they produce a better outcome. On this principle, one would reject action in the ‘drop’ version, where harm to the individual is intended as the means of saving the others, but take action in the ‘divert’ version, where harm is foreseen but not intended.
Measuring the moral decision-making process
The original problems outlined above were then modified to create a one-to-one association between options and moral principles: the number of lives saved in the ‘divert’ case was reduced to three, while the number in the ‘drop’ case stayed at five. This ensures that only the ‘drop’ option is supported by the utilitarian principle (as more lives are saved this way) and only the ‘divert’ option is supported by the intention principle (as harm remains unintentional in this case). The scenarios were then presented in two ways: under separate evaluation, where a person makes a ‘yes’ or ‘no’ decision about one option; and under joint evaluation, where an individual makes a single decision with all of the options laid out simultaneously.
When all the options were laid out on the table (joint evaluation), 18% of participants said they would flip a switch to divert the train to a different track, killing one but saving three; 37% said they would take no action at all, as any action would bring harm to an individual, whereas inaction merely allows harm to occur. Finally, 45% said they would take the utilitarian approach and sacrifice the life of one to save the many: they would drop an individual in front of the train to prevent it from hitting the group, thereby becoming responsible for killing one person, but also for saving five lives.
Under separate evaluation, however, preferences shifted. People were more likely to endorse diverting over doing nothing (76%) than dropping over doing nothing (41%), despite the fact that the drop option saves more lives.
The results of our study suggest that people tend to use an action-based evaluation when they assess options one at a time, yet shift to an outcome-based evaluation of the more beneficial option when all the options are presented simultaneously. Individuals feel that, if they have to choose between two bad actions, they would rather take the action that saves more lives, even if that action is itself more aversive.
The moral divide
The study also revealed that when faced with moral dilemmas, people tend to split into two distinct camps, in which utilitarian harm is either always justified or never justified. This polarisation in moral judgment could be a result of principled reasoning, and could be beneficial or worrying depending on the context. When the divide clarifies points of disagreement, it can be beneficial. But in situations where building a coalition and reaching agreement is important, this hard divide is cause for worry. Whether it affects other moral, political and organisational dilemmas is a question for further research.
Moral judgments are a part of daily life and decision-making, especially in organisations. Managers and employees are often conflicted between the pursuit of profit- or growth-oriented outcomes and the means of achieving those outcomes, which may entail confusion and even harm to various stakeholders.
Managerial lessons
So, how can managers ensure that moral conflicts cause as little disruption as possible? The study shows that people are more likely to follow their moral instincts under separate evaluation, and to deliberate more under joint evaluation. This deliberation makes people more likely to seek out decisions with a clear, accessible principled basis. However, the study also shows that organisations hoping to use joint evaluation to nudge people towards more beneficial choices may, in settings that involve aversive moral choices, also encounter increased inaction.
For this reason, managers should strive to provide key decision-makers within their organisation with all the information required to make a decision that will benefit the organisation as a whole, and should consider asking individuals to set aside their preference for inaction.
Moral conflict is not something that can be analysed easily; each person will react differently to the dilemma they are experiencing. Our research highlights that when people face moral conflict with simultaneous choices, the majority will choose the utilitarian approach because it yields the best outcome, even if it is the most aversive action. It is a leader’s job to make sure their teams have all the information needed to take the most beneficial action. Simultaneous choices can also lead people to make consistent and principled decisions. This is an additional advantage for an organisation, because consistent decisions are less likely to change with each new bit of information that arrives, as people will have considered the decision adequately to begin with. Such decisions are also easier to defend against critique.
About the authors:
Chia-Jung Tsay is Associate Professor at UCL School of Management.
Netta Barak-Corren is Assistant Professor of Law at the Hebrew University of Jerusalem.
Fiery Cushman is Assistant Professor of Psychology at Harvard University.
Max H Bazerman is Professor of Business Administration at Harvard Business School.