Minimal Virtues in Ethical Cognition
Notes
Greene et al’s Dual-Process Theory
Greene et al offer a dual-process theory of ethical cognition:
‘this theory associates controlled cognition with utilitarian (or consequentialist) moral judgment aimed at promoting the “greater good” (Mill, 1861/1998) while associating automatic emotional responses with competing deontological judgments that are naturally justified in terms of rights or duties (Kant, 1785/1959).’ (Greene, 2015, p. 203)
The theory was developed in part to explain otherwise apparently anomalous responses to moral dilemmas. In particular, people have substantially different attitudes to killing one person in order to save several others depending on whether the killing involves pressing a switch (as in the Switch dilemma) or whether it involves using personal force to push someone into the path of the danger (as in the Footbridge dilemma).[1]
What is the explanation Greene et al’s theory offers?
‘this pattern of judgment [Switch—yes; Footbridge—no] reflects the outputs of distinct and (in some cases) competing neural systems [...] The more “personal” harmful action in the footbridge case, pushing the man off the footbridge, triggers a relatively strong negative emotional response, whereas the relatively impersonal harmful action in the switch case does not.’ (Greene, 2015, pp. 203–4)
Mixed Behavioural Evidence for This Theory
One prediction of the theory is that increasing time pressure should increase the influence of automatic emotional processes relative to the influence of controlled cognition, which in turn should make responses that are characteristically deontological more likely.
This prediction is supported by Suter & Hertwig (2011), among others.[2] But Bago & De Neys (2019) consider what happens when subjects first make a moral judgement under time pressure and extraneous cognitive load and then, immediately afterwards, make another moral judgement (in answer to the same question) with no time pressure and no extraneous cognitive load. They report:
‘Our critical finding is that although there were some instances in which deliberate correction occurred, these were the exception rather than the rule. Across the studies, results consistently showed that in the vast majority of cases in which people opt for a [consequentialist] response after deliberation, the [consequentialist] response is already given in the initial phase’ (Bago & De Neys, 2019, p. 1794).
Rosas & Aguilar-Pardo (2020) find, contrary to what Greene et al’s theory predicts, that subjects are less likely to give characteristically deontological responses under extreme time pressure.
This contrary finding is not theoretically unmotivated: there are also theoretical reasons for holding that automatic emotional processes should support characteristically utilitarian responses (Kurzban, DeScioli, & Fein, 2012).
As there is a substantial body of neuropsychological evidence in favour of Greene et al’s theory (reviewed in Greene, 2014), its defenders may be little moved by the mixed behavioural evidence. But there is a reason, not decisive but substantial, to expect mixed evidence more generally ...
Methodological Challenge
The mixed pattern of evidence for and against Greene et al’s theory might be explained by their choice of vignettes using trolley cases as stimuli. Waldmann, Nagel, & Wiegmann (2012, p. 288) offer a brief summary of some factors which have been considered to influence responses, including:
- whether an agent is part of the danger (on the trolley) or a bystander;
- whether an action involves forceful contact with a victim;
- whether an action targets an object or the victim;
- how far the agent is from the victim;[3] and
- how the victim is described.
Other factors include whether there are irrelevant alternatives (Wiegmann, Horvath, & Meyer, 2020); and order of presentation (Schwitzgebel & Cushman, 2015).
They comment:
‘A brief summary of the research of the past years is that it has been shown that almost all these confounding factors influence judgments, along with a number of others [...] it seems hopeless to look for the one and only explanation of moral intuitions in dilemmas. The research suggests that various moral and nonmoral factors interact in the generation of moral judgments about dilemmas’ (Waldmann et al., 2012, pp. 288, 290).
For proponents of Greene et al’s view, this might be taken as encouragement. Yes, the evidence is a bit mixed. But perhaps what appears to be evidence falsifying predictions of the view will turn out to be merely a consequence of extraneous, nonmoral factors influencing judgements.
Alternatively, Waldmann et al.’s observation could be taken to suggest that few if any of the studies relying on dilemmas presented in vignette form provide reliable evidence about moral factors, since they do not adequately control for extraneous, nonmoral factors. As an illustration, Gawronski, Armstrong, Conway, Friesdorf, & Hütter (2017) note that aversion to killing (which would be characteristically deontological) needs to be separated from a general preference for inaction. When only aversion to killing is measured, time pressure appears to increase characteristically deontological responses, which would support Greene et al’s theory (Conway & Gawronski, 2013). But when aversion to killing and a preference for inaction are disentangled, Gawronski et al. (2017) found evidence only that time pressure increases the preference for inaction.
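To make the general strategy of separating inclinations concrete, here is a toy sketch of process-dissociation-style arithmetic in the spirit of Conway & Gawronski (2013), which compares response rates across congruent and incongruent dilemmas. The response rates below are invented, and the equations follow the standard processing-tree logic rather than reproducing the published model, so treat this as an illustration only:

```python
# Toy process-dissociation arithmetic (illustrative only; see
# Conway & Gawronski, 2013, for the actual model).
#
# Congruent dilemmas: harm does NOT maximise outcomes, so both
# inclinations reject it:      P(unacc | congruent)   = U + (1 - U) * D
# Incongruent dilemmas: harm DOES maximise outcomes, so only the
# deontological inclination rejects it:
#                              P(unacc | incongruent) = (1 - U) * D

def pd_parameters(p_unacc_congruent: float, p_unacc_incongruent: float):
    """Estimate utilitarian (U) and deontological (D) inclinations
    from rates of 'harm is unacceptable' responses."""
    U = p_unacc_congruent - p_unacc_incongruent
    D = p_unacc_incongruent / (1 - U)
    return U, D

# Invented response rates for illustration:
U, D = pd_parameters(p_unacc_congruent=0.85, p_unacc_incongruent=0.45)
print(f"U = {U:.2f}, D = {D:.2f}")  # U = 0.40, D = 0.75
```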
While the combination of mixed behavioural evidence and methodological challenges associated with using dilemmas presented in vignettes does not provide a case for rejecting Greene et al’s view, it does motivate considering fresh alternatives.
The Search for Minimal Virtues
I do not have a minimal model that would be useful for formulating conjectures about ethical cognition but I would like to share some ideas about where we might find one.
Step 1. Abandon the deontological/utilitarian idea with the aim, eventually, of finding a minimal model of the ethical.
Step 2. What would a minimal model be a model of? Haidt & Graham (2007) claim that there are five evolutionarily ancient, psychologically basic abilities linked to:
- harm/care
- fairness (including reciprocity)
- in-group loyalty
- respect for authority
- purity, sanctity
Step 3. Which processes might implement a minimal model? One possibility is to consider the habitual processes which support selecting goals (compare Crockett, 2013 and Cushman, 2013).
Habitual processes simplify the problem of goal selection by representing the world as involving only stimulus–action links. They are characterised by Thorndike’s Law of Effect:
‘The presentation of an effective [rewarding] outcome following an action [...] reinforces a connection between the stimuli present when the action is performed and the action itself so that subsequent presentations of these stimuli elicit the [...] action as a response’ (Dickinson, 1994, p. 48).
When the environment and an agent’s preferences are sufficiently stable, habitual processes can approximate the computation of expected utility without the computational costs involved in identifying how probable different action outcomes are and how desirable each outcome would be (Wunderlich, Dayan, & Dolan, 2012).
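To illustrate, the Law of Effect can be written as a simple rule that nudges a cached stimulus–action link toward each observed reward. This is a minimal sketch, not a model anyone in this literature has proposed; the names and the learning rate are assumptions:

```python
# Minimal sketch of the Law of Effect as a model-free update rule.
# All names and the learning-rate value are illustrative assumptions.

ALPHA = 0.1  # assumed learning rate

def update_link(strength: float, reward: float, alpha: float = ALPHA) -> float:
    """Nudge a cached stimulus-action link toward the observed reward."""
    return strength + alpha * (reward - strength)

# Repeatedly rewarding an action in a stimulus context drives the cached
# link toward the average reward, approximating the action's expected
# utility without representing outcomes or their probabilities at all.
links = {("lever_visible", "press"): 0.0}
for _ in range(100):
    links[("lever_visible", "press")] = update_link(
        links[("lever_visible", "press")], reward=1.0)

print(round(links[("lever_visible", "press")], 3))  # close to 1.0
```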
Habitual processes have a signature limit: the actions they control persist in extinction even after the outcome that reinforced them has been devalued (for example, a rat may go on pressing a lever for food even after the food has been paired with illness).
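Continuing the sketch above (again an illustration under assumed names and values, not a published model), the signature limit falls out naturally: choice consults only the cached link strengths, never a representation of the outcome, so devaluation leaves behaviour unchanged until new rewarded trials re-train the links:

```python
# Sketch of the signature limit (illustrative, not a published model).
# A habitual controller selects actions from cached stimulus-action
# strengths alone; it never consults a representation of the outcome.

links = {"press": 0.9, "withhold": 0.1}   # trained by past food rewards
outcome_value = {"food": 1.0}

# Devalue the outcome, e.g. by pairing the food with illness.
outcome_value["food"] = -1.0

# In extinction no reward is delivered, so nothing re-trains the links,
# and choice still follows the stale cached strengths.
choice = max(links, key=links.get)
print(choice)  # 'press', despite the devalued outcome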
Step 4. Find ways to interfere with habitual processes so that they are influenced by the ethical factors identified in Step 2. Initial approach: target rewards.
One possibility would be to have vicarious rewards, perhaps especially for in-group members. Suppose observing you being rewarded could trigger in me some of the reward processes that would typically occur in me if it were me, not you, who was being rewarded. Then the strength of stimulus–action links in me would be influenced not only by which outcomes are rewarding for me but also by which outcomes are rewarding for you. This would approximate an in-group utilitarian decision-making process.
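As a sketch of how this might work (the weighting parameter and all names are assumptions, not an established model), the Law-of-Effect update could reinforce my links using my own reward plus a discounted share of a reward I observe you receive:

```python
# Illustrative sketch: a Law-of-Effect update in which rewards observed
# to reach an in-group member also reinforce my own stimulus-action
# links. The weighting parameter is an assumption.

ALPHA = 0.1             # assumed learning rate
VICARIOUS_WEIGHT = 0.5  # assumed discount on rewards observed in others

def update_link(strength: float, my_reward: float,
                observed_reward: float) -> float:
    """Reinforce a link using my reward plus a share of yours."""
    combined = my_reward + VICARIOUS_WEIGHT * observed_reward
    return strength + ALPHA * (combined - strength)

# An action that pays off only for an in-group member still gets
# reinforced in me, nudging my habits toward the in-group's good.
strength = 0.0
for _ in range(100):
    strength = update_link(strength, my_reward=0.0, observed_reward=1.0)
print(round(strength, 3))  # close to 0.5: tracks a weighted sum of rewards
```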
A second possibility is inspired by aversion to bitterness as a mechanism for avoiding poisons. Poisonous foods are often bitter, and a range of animals including sea anemones become averse to a food type after a single bitter encounter (Garcia & Hankins, 1975). Further, animals who encounter a greater proportion of poisonous foods in their normal diet (herbivores) show both higher sensitivity to bitterness (Li & Zhang, 2014) and a higher tolerance for it (Ji, 1994).
In humans, unfairness can be detected early in the second year of life (Geraci & Surian, 2011; Surian, Ueno, Itakura, & Meristo, 2018) and, in adults at least, unfairness can also produce sensations of bitterness.[4]
If a limited but useful range of moral violations can produce bitter sensations, then general-purpose learning mechanisms can produce aversion to the actions that give rise to these violations.
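For illustration (a sketch under the stated assumptions; the one-shot learning rate and the value of the bitter-like signal are invented), a taste-aversion-style rule could attach strongly negative value to an action after a single violation:

```python
# Sketch under stated assumptions: detecting a moral violation produces
# an aversive, bitter-like signal, and a taste-aversion-style rule
# attaches strongly negative value to the offending action after a
# single episode. Signal and rate values are invented.

AVERSION_RATE = 0.9  # assumed: aversive outcomes are learned in one shot

def update_after_episode(strength: float, signal: float,
                         rate: float = AVERSION_RATE) -> float:
    """Move the cached action value toward the affective signal."""
    return strength + rate * (signal - strength)

# A mildly positive link learned from past payoffs...
strength = 0.2
# ...turns strongly negative after one act perceived as unfair.
strength = update_after_episode(strength, signal=-1.0)
print(round(strength, 2))  # -0.88
```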
Endnotes
See Greene (2015, p. 203): ‘We developed this theory in response to a long-standing philosophical puzzle ... Why do people typically say “yes” to hitting the switch, but “no” to pushing?’ ↩︎
See also Trémolière & Bonnefon (2014) and Conway & Gawronski (2013) (who manipulated cognitive load). ↩︎
After this review was published, Nagel & Waldmann (2013) provided substantial evidence that distance may not be a factor influencing moral intuitions after all (the impression that it does was based on confounding distance with factors typically associated with distance such as group membership and efficacy of action). ↩︎
See Chapman et al. (2009), who establish that (1) responses to bitterness are marked by activation of the levator labii muscle ‘which raises the upper lip and wrinkles the nose’; (2) bitter responses are made not just to bitter tastes but also to ‘photographs of uncleanliness and contamination-related disgust stimuli, including feces, injuries, insects, etc.’; and (3) in a dictator game, participants showed ‘objective (facial motor) signs of disgust that were proportional to the degree of unfairness they experienced.’ ↩︎