Mixed Evidence for a Dual-Process Theory of Ethical Cognition
If the slides are not working, or you prefer them full screen, please try this link.
Notes
Greene et al’s Dual-Process Theory
Greene et al offer a dual-process theory of ethical cognition:
‘this theory associates controlled cognition with utilitarian (or consequentialist) moral judgment aimed at promoting the “greater good” (Mill, 1861/1998) while associating automatic emotional responses with competing deontological judgments that are naturally justified in terms of rights or duties (Kant, 1785/1959).’ (Greene, 2015, p. 203)
The theory was developed in part to explain otherwise apparently anomalous responses to moral dilemmas. In particular, people have substantially different attitudes to killing one person in order to save several others depending on whether the killing involves pressing a switch (as in the Switch dilemma) or whether it involves pushing someone from a footbridge into the path of great danger (as in the Footbridge dilemma).[1]
What is the explanation Greene et al’s theory offers?
‘this pattern of judgment [Switch—yes; Footbridge—no] reflects the outputs of distinct and (in some cases) competing neural systems [...] The more “personal” harmful action in the footbridge case, pushing the man off the footbridge, triggers a relatively strong negative emotional response, whereas the relatively impersonal harmful action in the switch case does not.’ (Greene, 2015, pp. 203–4)
Mixed Behavioural Evidence for This Dual-Process Theory
One prediction of the theory is that increasing time pressure should increase the influence of automatic emotional processes relative to the influence of controlled cognition, which in turn should make responses that are characteristically deontological more likely.
This prediction is supported by Suter & Hertwig (2011), among others.[2] But Bago & De Neys (2019) consider what happens when subjects first make a moral judgement under time pressure and extraneous cognitive load and then, just after, make another moral judgement (in answer to the same question) with no time pressure and no extraneous cognitive load. They report:
‘Our critical finding is that although there were some instances in which deliberate correction occurred, these were the exception rather than the rule. Across the studies, results consistently showed that in the vast majority of cases in which people opt for a [consequentialist] response after deliberation, the [consequentialist] response is already given in the initial phase’ (Bago & De Neys, 2019, p. 1794).
Rosas & Aguilar-Pardo (2020) find that, contrary to what Greene et al’s theory predicts, subjects are less likely to give characteristically deontological responses under extreme time pressure.
This converse finding of Rosas & Aguilar-Pardo (2020) is not theoretically unmotivated: there are independent reasons for holding that automatic emotional processes should support characteristically utilitarian responses (Kurzban, DeScioli, & Fein, 2012).
As there is a substantial body of neuropsychological evidence in favour of Greene et al’s theory (reviewed in Greene, 2014), its defenders may be little moved by the mixed behavioural evidence. But there is a reason, not decisive but substantial, to expect mixed evidence more generally ...
Methodological Challenge
The mixed pattern of evidence for and against Greene et al’s theory might be explained by their choice of vignettes using trolley cases as stimuli. Waldmann, Nagel, & Wiegmann (2012, p. 288) offer a brief summary of some factors which have been found to influence responses, including:
- whether an agent is part of the danger (on the trolley) or a bystander;
- whether an action involves forceful contact with a victim;
- whether an action targets an object or the victim;
- how far the agent is from the victim;[3] and
- how the victim is described.
Other factors include whether there are irrelevant alternatives (Wiegmann, Horvath, & Meyer, 2020); and order of presentation (Schwitzgebel & Cushman, 2015).
They comment:
‘A brief summary of the research of the past years is that it has been shown that almost all these confounding factors influence judgments, along with a number of others [...] it seems hopeless to look for the one and only explanation of moral intuitions in dilemmas. The research suggests that various moral and nonmoral factors interact in the generation of moral judgments about dilemmas’ (Waldmann et al., 2012, pp. 288, 290).
For proponents of Greene et al’s view, this might be taken as encouragement. Yes, the evidence is a bit mixed. But perhaps what appears to be evidence falsifying predictions of the view will turn out to be merely a consequence of extraneous, nonmoral factors influencing judgements.
Alternatively, Waldmann et al.’s observation could be taken to suggest that few if any of the studies relying on dilemmas presented in vignette form provide reliable evidence about moral factors since they do not adequately control for extraneous, nonmoral factors. As an illustration, Gawronski, Armstrong, Conway, Friesdorf, & Hütter (2017) note that aversion to killing (which would be characteristically deontological) needs to be separated from a preference for inaction. When considering only aversion to killing, time pressure appears to result in characteristically deontological responses, which would support Greene et al’s theory (Conway & Gawronski, 2013). But when aversion to killing and a preference for inaction are considered together, Gawronski et al. (2017) found evidence only that time pressure increases preferences for inaction.
While the combination of mixed behavioural evidence and methodological challenges associated with using dilemmas presented in vignettes does not provide a case for rejecting Greene et al’s view, it does motivate considering fresh alternatives.
Suggestion
While we have not seen decisive evidence against it, we have seen enough to motivate seeking alternatives.
Glossary
References
Endnotes
See Greene (2015, p. 203): ‘We developed this theory in response to a long-standing philosophical puzzle ... Why do people typically say “yes” to hitting the switch, but “no” to pushing?’ ↩︎
See also Trémolière & Bonnefon (2014) and Conway & Gawronski (2013) (who manipulated cognitive load). ↩︎
After this review was published, Nagel & Waldmann (2013) provided substantial evidence that distance may not be a factor influencing moral intuitions after all (the impression that it does was based on confounding distance with factors typically associated with distance such as group membership and efficacy of action). ↩︎