
Mixed Evidence for a Dual-Process Theory of Ethical Cognition


Notes

Greene et al.’s Dual-Process Theory

Greene et al. offer a dual-process theory of ethical cognition:

‘this theory associates controlled cognition with utilitarian (or consequentialist) moral judgment aimed at promoting the “greater good” (Mill, 1861/1998) while associating automatic emotional responses with competing deontological judgments that are naturally justified in terms of rights or duties (Kant, 1785/1959).’ (Greene, 2015, p. 203)

The theory was developed in part to explain otherwise apparently anomalous responses to moral dilemmas. In particular, people have substantially different attitudes to killing one person in order to save several others depending on whether the killing involves pressing a switch (as in the Switch dilemma) or dropping someone through a trapdoor into the path of the trolley (as in the Footbridge dilemma).[1]

What explanation does Greene et al.’s theory offer?

‘this pattern of judgment [Switch—yes; Footbridge—no] reflects the outputs of distinct and (in some cases) competing neural systems [...] The more “personal” harmful action in the footbridge case, pushing the man off the footbridge, triggers a relatively strong negative emotional response, whereas the relatively impersonal harmful action in the switch case does not.’ (Greene, 2015, pp. 203–204)

Mixed Behavioural Evidence for This Dual-Process Theory

One prediction of the theory is that increasing time pressure should increase the influence of automatic emotional processes relative to the influence of controlled cognition, which in turn should make responses that are characteristically deontological more likely.
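To make the logic of this prediction concrete, here is a deliberately simplified toy simulation. It is a sketch of the dual-process idea, not Greene et al.’s own formalism; the deadline values and the assumed deliberation-time distribution are invented for illustration.

```python
import random

rng = random.Random(0)

def respond(deadline: float) -> str:
    """Toy model of one trial: an automatic, deontological-leaning response
    is available immediately; a controlled, utilitarian-leaning response
    arrives only after a variable deliberation time (assumed ~N(3, 1) seconds,
    an invented figure). If deliberation misses the deadline, the automatic
    response is given."""
    deliberation_time = rng.gauss(3.0, 1.0)
    return "utilitarian" if deliberation_time <= deadline else "deontological"

# Tightening the deadline (more time pressure) should raise the share of
# characteristically deontological responses under this toy model.
for deadline in (5.0, 3.0, 1.0):
    n = 10_000
    deontological = sum(respond(deadline) == "deontological" for _ in range(n))
    print(f"deadline {deadline:.0f}s: {deontological / n:.0%} deontological")
```

Under these assumptions, a tighter deadline leaves more trials in which the controlled process has not finished, so the share of characteristically deontological responses rises. That is the qualitative pattern the prediction describes.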

This prediction is supported by Suter & Hertwig (2011), among others.[2] But Bago & De Neys (2019) consider what happens when subjects first make a moral judgement under time pressure and extraneous cognitive load and then, just after, make another moral judgement (in answer to the same question) with no time pressure and no extraneous cognitive load. They report:

‘Our critical finding is that although there were some instances in which deliberate correction occurred, these were the exception rather than the rule. Across the studies, results consistently showed that in the vast majority of cases in which people opt for a [consequentialist] response after deliberation, the [consequentialist] response is already given in the initial phase’ (Bago & De Neys, 2019, p. 1794).

Rosas & Aguilar-Pardo (2020) find, contrary to what Greene et al.’s theory predicts, that subjects are less likely to give characteristically deontological responses under extreme time pressure.

Rosas & Aguilar-Pardo’s (2020) contrary finding is not theoretically unmotivated: there are also some theoretical reasons for holding that automatic emotional processes should support characteristically utilitarian responses (Kurzban, DeScioli, & Fein, 2012).

As there is a substantial body of neuropsychological evidence in favour of Greene et al.’s theory (reviewed in Greene, 2014), its defenders may be little moved by the mixed behavioural evidence. But there is a reason, not decisive but substantial, to expect mixed evidence more generally ...

Methodological Challenge

The mixed pattern of evidence for and against Greene et al.’s theory might be explained by their use of vignettes describing trolley cases as stimuli. Waldmann, Nagel, & Wiegmann (2012, p. 288) offer a brief summary of some factors which have been found to influence responses, including:

  • whether an agent is part of the danger (on the trolley) or a bystander;
  • whether an action involves forceful contact with a victim;
  • whether an action targets an object or the victim;
  • how far the agent is from the victim;[3] and
  • how the victim is described.

Other factors include the presence of irrelevant alternatives (Wiegmann, Horvath, & Meyer, 2020) and the order of presentation (Schwitzgebel & Cushman, 2015).

Waldmann et al. comment:

‘A brief summary of the research of the past years is that it has been shown that almost all these confounding factors influence judgments, along with a number of others [...] it seems hopeless to look for the one and only explanation of moral intuitions in dilemmas. The research suggests that various moral and nonmoral factors interact in the generation of moral judgments about dilemmas’ (Waldmann et al., 2012, pp. 288, 290).

For proponents of Greene et al.’s view, this might be taken as encouragement. Yes, the evidence is a bit mixed. But perhaps what appears to be evidence falsifying predictions of the view will turn out to be merely a consequence of extraneous, nonmoral factors influencing judgements.

Alternatively, Waldmann et al.’s observation could be taken to suggest that few, if any, of the studies relying on dilemmas presented in vignette form provide reliable evidence about moral factors, since they do not adequately control for extraneous, nonmoral factors. As an illustration, Gawronski, Armstrong, Conway, Friesdorf, & Hütter (2017) note that aversion to killing (which would be characteristically deontological) needs to be separated from a general preference for inaction. When only aversion to killing is considered, time pressure appears to produce characteristically deontological responses, which would support Greene et al.’s theory (Conway & Gawronski, 2013). But when aversion to killing and a preference for inaction are considered together, Gawronski et al. (2017) found evidence only that time pressure increases preferences for inaction.
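To see how this separation works, here is a minimal sketch of the processing tree behind Gawronski et al.’s (2017) CNI model: with probability C a response is driven by consequences, with probability (1 − C)·N by moral norms, and otherwise by a generalized inaction tendency I. The parameter values below are invented for illustration, not estimates from their data.

```python
def p_action(C: float, N: float, I: float,
             norm_prescribes_action: bool,
             benefits_outweigh_costs: bool) -> float:
    """Probability of choosing action under the CNI processing tree
    (Gawronski et al., 2017): with probability C the response follows
    consequences, with probability (1 - C) * N it follows the moral norm,
    and with probability (1 - C) * (1 - N) a generalized inaction
    tendency decides (inaction with probability I)."""
    act_by_consequences = 1.0 if benefits_outweigh_costs else 0.0
    act_by_norm = 1.0 if norm_prescribes_action else 0.0
    return (C * act_by_consequences
            + (1 - C) * N * act_by_norm
            + (1 - C) * (1 - N) * (1 - I))

# Invented, illustrative parameters: moderate sensitivity to consequences
# and to norms, with a mild bias toward inaction.
C, N, I = 0.4, 0.3, 0.6

# A trolley-style dilemma: the norm proscribes the action (killing one),
# but the benefits of acting outweigh the costs (five saved).
p = p_action(C, N, I,
             norm_prescribes_action=False,
             benefits_outweigh_costs=True)
print(f"P(action) = {p:.3f}")  # 0.568 with these made-up values
```

The point of the dissociation is visible in the tree: a response that looks ‘deontological’ in a trolley-style dilemma could come from the norm branch or from the inaction branch, and only by estimating the parameters separately can the two be told apart.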

While the combination of mixed behavioural evidence and the methodological challenges associated with using dilemmas presented in vignettes does not provide a case for rejecting Greene et al.’s view, it does motivate considering fresh alternatives.

Suggestion

While we have not seen decisive evidence against Greene et al.’s dual-process theory, we have seen enough to motivate seeking alternatives.

Glossary

characteristically deontological : According to Greene, a judgement is characteristically deontological if it is one in ‘favor of characteristically deontological conclusions (e.g., “It’s wrong despite the benefits”)’ (Greene, 2007, p. 39). According to Gawronski et al. (2017, p. 365), ‘a given judgment cannot be categorized as deontological without confirming its property of being sensitive to moral norms.’
dual-process theory : Any theory concerning abilities in a particular domain on which those abilities involve two or more processes which are distinct in this sense: the conditions which influence whether one process occurs differ from the conditions which influence whether another occurs.
Footbridge : A dilemma; also known as Drop. A runaway trolley is about to run over and kill five people. You can hit a switch that will release the bottom of a footbridge and one person will fall onto the track. The trolley will hit this person, slow down, and not hit the five people further down the track. Is it okay to hit the switch?
Switch : A dilemma; also known as Trolley. A runaway trolley is about to run over and kill five people. You can hit a switch that will divert the trolley onto a different set of tracks where it will kill only one. Is it okay to hit the switch?
trolley cases : Scenarios designed to elicit puzzling or informative patterns of judgement about how someone should act. Examples include Trolley, Transplant, and Drop. Their use was pioneered by Foot (1967) and Thomson (1976), who aimed to use them to understand ethical considerations around abortion and euthanasia.

References

Bago, B., & De Neys, W. (2019). The Intuitive Greater Good: Testing the Corrective Dual Process Model of Moral Cognition. Journal of Experimental Psychology: General, 148(10), 1782–1801. https://doi.org/10.1037/xge0000533
Conway, P., & Gawronski, B. (2013). Deontological and utilitarian inclinations in moral decision making: A process dissociation approach. Journal of Personality and Social Psychology, 104(2), 216–235. https://doi.org/10.1037/a0031021
Crockett, M. J. (2013). Models of morality. Trends in Cognitive Sciences, 17(8), 363–366. https://doi.org/10.1016/j.tics.2013.06.005
Foot, P. (1967). The problem of abortion and the doctrine of the double effect. Oxford Review, 5, 5–15.
Gawronski, B., Armstrong, J., Conway, P., Friesdorf, R., & Hütter, M. (2017). Consequences, norms, and generalized inaction in moral dilemmas: The CNI model of moral decision-making. Journal of Personality and Social Psychology, 113(3), 343–376. https://doi.org/10.1037/pspa0000086
Gawronski, B., & Beer, J. S. (2017). What makes moral dilemma judgments “utilitarian” or “deontological”? Social Neuroscience, 12(6), 626–632. https://doi.org/10.1080/17470919.2016.1248787
Gawronski, B., Conway, P., Armstrong, J., Friesdorf, R., & Hütter, M. (2018). Effects of incidental emotions on moral dilemma judgments: An analysis using the CNI model. Emotion, 18(7), 989–1008. https://doi.org/10.1037/emo0000399
Greene, J. D. (2007). The Secret Joke of Kant’s Soul. In W. Sinnott-Armstrong (Ed.), Moral Psychology, Vol. 3 (pp. 35–79). MIT Press.
Greene, J. D. (2014). Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics. Ethics, 124(4), 695–726. https://doi.org/10.1086/675875
Greene, J. D. (2015). The cognitive neuroscience of moral judgment and decision making. In J. Decety & T. Wheatley (Eds.), The moral brain: A multidisciplinary perspective (pp. 197–220). Cambridge, MA: MIT Press.
Hrdy, S. B. (1979). Infanticide among animals: A review, classification, and examination of the implications for the reproductive strategies of females. Ethology and Sociobiology, 1(1), 13–40. https://doi.org/10.1016/0162-3095(79)90004-9
Kurzban, R., DeScioli, P., & Fein, D. (2012). Hamilton vs. Kant: Pitting adaptations for altruism against adaptations for moral judgment. Evolution and Human Behavior, 33(4), 323–333.
Nagel, J., & Waldmann, M. R. (2013). Deconfounding distance effects in judgments of moral obligation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39(1), 237.
Rosas, A., & Aguilar-Pardo, D. (2020). Extreme time-pressure reveals utilitarian intuitions in sacrificial dilemmas. Thinking & Reasoning, 26(4), 534–551. https://doi.org/10.1080/13546783.2019.1679665
Schwitzgebel, E., & Cushman, F. (2015). Philosophers’ biased judgments persist despite training, expertise and reflection. Cognition, 141, 127–137. https://doi.org/10.1016/j.cognition.2015.04.015
Suter, R. S., & Hertwig, R. (2011). Time and moral judgment. Cognition, 119(3), 454–458. https://doi.org/10.1016/j.cognition.2011.01.018
Thomson, J. J. (1976). Killing, Letting Die, and The Trolley Problem. The Monist, 59(2), 204–217. https://doi.org/10.5840/monist197659224
Trémolière, B., & Bonnefon, J.-F. (2014). Efficient kill–save ratios ease up the cognitive demands on counterintuitive moral utilitarianism. Personality and Social Psychology Bulletin, 40(7), 923–930. https://doi.org/10.1177/0146167214530436
Waldmann, M. R., Nagel, J., & Wiegmann, A. (2012). Moral Judgment. In K. J. Holyoak & R. G. Morrison (Eds.), The Oxford handbook of thinking and reasoning (pp. 274–299). Oxford: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199734689.013.0019
Wiegmann, A., Horvath, J., & Meyer, K. (2020). Intuitive expertise and irrelevant options. Oxford Studies in Experimental Philosophy, 3, 275–310.

Endnotes

  1. See Greene (2015, p. 203): ‘We developed this theory in response to a long-standing philosophical puzzle ... Why do people typically say “yes” to hitting the switch, but “no” to pushing?’ ↩︎

  2. See also Trémolière & Bonnefon (2014) and Conway & Gawronski (2013) (who manipulated cognitive load). ↩︎

  3. After this review was published, Nagel & Waldmann (2013) provided substantial evidence that distance may not be a factor influencing moral intuitions after all (the impression that it does was based on confounding distance with factors typically associated with distance such as group membership and efficacy of action). ↩︎