
Minimal Virtues in Ethical Cognition

If the slides are not working, or you prefer them full screen, please try this link.

Notes

Greene et al’s Dual-Process Theory

Greene et al offer a dual-process theory of ethical cognition:

‘this theory associates controlled cognition with utilitarian (or consequentialist) moral judgment aimed at promoting the “greater good” (Mill, 1861/1998) while associating automatic emotional responses with competing deontological judgments that are naturally justified in terms of rights or duties (Kant, 1785/1959).’ (Greene, 2015, p. 203)

The theory was developed in part to explain otherwise apparently anomalous responses to moral dilemmas. In particular, people have substantially different attitudes to killing one person in order to save several others depending on whether the killing involves pressing a switch (as in the Switch dilemma) or whether it involves dropping someone through a trapdoor into the path of great danger (as in the Footbridge dilemma).[1]

What is the explanation Greene et al’s theory offers?

‘this pattern of judgment [Switch—yes; Footbridge—no] reflects the outputs of distinct and (in some cases) competing neural systems [...] The more “personal” harmful action in the footbridge case, pushing the man off the footbridge, triggers a relatively strong negative emotional response, whereas the relatively impersonal harmful action in the switch case does not.’ (Greene, 2015, pp. 203–204)

Mixed Behavioural Evidence for This Theory

One prediction of the theory is that increasing time pressure should increase the influence of automatic emotional processes relative to the influence of controlled cognition, which in turn should make responses that are characteristically deontological more likely.

This prediction is supported by Suter & Hertwig (2011), among others.[2] But Bago & De Neys (2019) consider what happens when subjects first make a moral judgement under time pressure and extraneous cognitive load and then, just after, make another moral judgement (in answer to the same question) with no time pressure and no extraneous cognitive load. They report:

‘Our critical finding is that although there were some instances in which deliberate correction occurred, these were the exception rather than the rule. Across the studies, results consistently showed that in the vast majority of cases in which people opt for a [consequentialist] response after deliberation, the [consequentialist] response is already given in the initial phase’ (Bago & De Neys, 2019, p. 1794).

Rosas & Aguilar-Pardo (2020) find, contrary to what Greene et al’s theory predicts, that subjects are less likely to give characteristically deontological responses under extreme time pressure.

This contrary finding is not theoretically unmotivated: there are also some theoretical reasons for holding that automatic emotional processes should support characteristically utilitarian responses (Kurzban, DeScioli, & Fein, 2012).

As there is a substantial body of neuropsychological evidence in favour of Greene et al’s theory (reviewed in Greene, 2014), its defenders may be little moved by the mixed behavioural evidence. But there is a reason, not decisive but substantial, to expect mixed evidence more generally ...

Methodological Challenge

The mixed pattern of evidence for and against Greene et al’s theory might be explained by their choice of vignettes using trolley cases as stimuli. Waldmann, Nagel, & Wiegmann (2012, p. 288) offer a brief summary of some factors which have been considered to influence responses, including:

  • whether an agent is part of the danger (on the trolley) or a bystander;
  • whether an action involves forceful contact with a victim;
  • whether an action targets an object or the victim;
  • how far the agent is from the victim;[3] and
  • how the victim is described.

Other factors include whether there are irrelevant alternatives (Wiegmann, Horvath, & Meyer, 2020); and order of presentation (Schwitzgebel & Cushman, 2015).

Waldmann et al. comment:

‘A brief summary of the research of the past years is that it has been shown that almost all these confounding factors influence judgments, along with a number of others [...] it seems hopeless to look for the one and only explanation of moral intuitions in dilemmas. The research suggests that various moral and nonmoral factors interact in the generation of moral judgments about dilemmas’ (Waldmann et al., 2012, pp. 288, 290).

For proponents of Greene et al’s view, this might be taken as encouragement. Yes, the evidence is a bit mixed. But perhaps what appears to be evidence falsifying predictions of the view will turn out to be merely a consequence of extraneous, nonmoral factors influencing judgements.

Alternatively, Waldmann et al.’s observation could be taken to suggest that few if any of the studies relying on dilemmas presented in vignette form provide reliable evidence about moral factors since they do not adequately control for extraneous, nonmoral factors. As an illustration, Gawronski, Armstrong, Conway, Friesdorf, & Hütter (2017) note that aversion to killing (which would be characteristically deontological) needs to be separated from a preference for inaction. When considering only aversion to killing, time pressure appears to result in characteristically deontological responses, which would support Greene et al’s theory (Conway & Gawronski, 2013). But when aversion to killing and a preference for inaction are considered together, Gawronski et al. (2017) found evidence only that time pressure increases preferences for inaction.

While the combination of mixed behavioural evidence and methodological challenges associated with using dilemmas presented in vignettes does not provide a case for rejecting Greene et al’s view, it does motivate considering fresh alternatives.

The Search for Minimal Virtues

I do not have a minimal model that would be useful for formulating conjectures about ethical cognition but I would like to share some ideas about where we might find one.

Step 1. Abandon the deontological/utilitarian idea with the aim, eventually, of finding a minimal model of the ethical.

Step 2. What would a minimal model be a model of? Haidt & Graham (2007) claim that there are five evolutionarily ancient, psychologically basic abilities linked to:

  • harm/care
  • fairness (including reciprocity)
  • in-group loyalty
  • respect for authority
  • purity, sanctity

Step 3. Which processes might implement a minimal model? One possibility is to consider the habitual processes which support selecting goals (compare Crockett, 2013 and Cushman, 2013).

Habitual processes simplify the problem of goal selection by representing the world as involving only stimulus–action links. They are characterised by Thorndike’s Law of Effect:

‘The presenta­tion of an effective [rewarding] outcome following an action [...] rein­forces a connection between the stimuli present when the action is per­formed and the action itself so that subsequent presentations of these stimuli elicit the [...] action as a response’ (Dickinson, 1994, p. 48).

When the environment and an agent’s preferences are sufficiently stable, habitual processes can approximate the computation of expected utility without the computational costs involved in identifying how probable different action outcomes are and how desirable each outcome would be (Wunderlich, Dayan, & Dolan, 2012).

Habitual processes have a signature limit: they persist in extinction following devaluation.
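
To make this concrete, here is a minimal sketch in Python (my own illustration, not code from Crockett, Cushman, Dickinson or the other sources cited): a tabular learner whose stimulus–action connection strengths are updated in the spirit of the Law of Effect. Under stable rewards the strengths come to track expected reward; and because the learner represents no outcomes, devaluing the outcome leaves its behaviour in extinction unchanged. The scenario names and parameters (e.g. `press_lever`, the learning rate) are invented for illustration.

```python
import random

class HabitualLearner:
    def __init__(self, actions, learning_rate=0.1):
        self.actions = list(actions)
        self.lr = learning_rate
        self.strength = {}  # (stimulus, action) -> learned connection strength

    def choose(self, stimulus, explore=0.1):
        # Mostly pick the action with the strongest stimulus-action connection;
        # occasionally explore.
        if random.random() < explore:
            return random.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.strength.get((stimulus, a), 0.0))

    def learn(self, stimulus, action, reward):
        # Law of Effect: a rewarding outcome strengthens the connection between
        # the stimulus present and the action performed. The running average
        # comes to track expected reward when the environment is stable.
        key = (stimulus, action)
        old = self.strength.get(key, 0.0)
        self.strength[key] = old + self.lr * (reward - old)


# Instrumental training: pressing the lever when the light is on yields food.
learner = HabitualLearner(actions=["press_lever", "do_nothing"])
for _ in range(500):
    action = learner.choose("light_on")
    reward = 1.0 if action == "press_lever" else 0.0
    learner.learn("light_on", action, reward)

# Suppose the food is now devalued (e.g. paired with illness). The learner
# stores no outcome representations to update, so in extinction (same stimulus,
# no outcomes delivered) it still presses: the signature limit.
print(learner.choose("light_on", explore=0.0))  # -> 'press_lever'
```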

Step 4. Find ways to interfere with habitual processes so that they are influenced by the ethical factors identified in Step 2. Initial approach: target rewards.

One possibility would be to have vicarious rewards, perhaps especially for in-group members. Suppose observing you being rewarded could trigger in me some of the reward processes that would typically occur in me if it were me, not you, who was being rewarded. Then the strength of stimulus–action links in me would be influenced not only by which outcomes are rewarding for me but also by which outcomes are rewarding for you. This would approximate an in-group utilitarian decision-making process.
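
As a minimal sketch of how vicarious reward might enter a Law-of-Effect update, assuming (my assumption, not a claim from the cited sources) that in-group members’ rewards are simply added to the learner’s own reward at a discount; the `vicarious_weight` parameter and the scenario are invented for illustration:

```python
def update(strength, stimulus, action, own_reward, ingroup_rewards,
           vicarious_weight=0.5, lr=0.1):
    """Law-of-Effect update in which in-group members' rewards also count, at a discount."""
    total = own_reward + vicarious_weight * sum(ingroup_rewards)
    key = (stimulus, action)
    strength[key] = strength.get(key, 0.0) + lr * (total - strength.get(key, 0.0))
    return strength


# Pulling the lever feeds two group members but not the actor; the action is
# nonetheless reinforced for the actor, roughly tracking summed in-group reward.
strength = {}
for _ in range(100):
    update(strength, "lever_available", "pull",
           own_reward=0.0, ingroup_rewards=[1.0, 1.0])
print(round(strength[("lever_available", "pull")], 2))  # -> 1.0
```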

A second possibility is inspired by aversion to bitterness as a mechanism for avoiding poisons. Poisonous foods are often bitter, and a range of animals including sea anemones become averse to a food type after a single bitter encounter (Garcia & Hankins, 1975). Further, animals who encounter a greater proportion of poisonous foods in their normal diet (herbivores) show both higher sensitivity to bitterness (Li & Zhang, 2014) and a higher tolerance for it (Ji, 1994).

In humans, unfairness can be detected early in the second year of life (Geraci & Surian, 2011; Surian, Ueno, Itakura, & Meristo, 2018) and, in adults at least, unfairness can also produce sensations of bitterness.[4]

If a limited but useful range of moral violations can produce bitter sensations, general-purpose learning mechanisms can produce aversion to actions that generate these moral violations.
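
As a minimal sketch, assuming (this is my illustration, not a model from the cited studies) that a detected unfair split triggers a fixed aversive, bitterness-like signal which a general-purpose Law-of-Effect learner simply folds into ordinary reward; all names and payoffs are invented for illustration:

```python
def bitter_signal(outcome):
    # Hypothetical mapping from a detected moral violation to an aversive signal.
    return -1.0 if outcome == "unfair_split" else 0.0

strength = {("resources_to_divide", a): 0.0 for a in ("keep_most", "share_evenly")}
lr = 0.1
for _ in range(100):
    for action, outcome, material_payoff in (("keep_most", "unfair_split", 0.6),
                                              ("share_evenly", "fair_split", 0.5)):
        reward = material_payoff + bitter_signal(outcome)  # aversion folds into reward
        key = ("resources_to_divide", action)
        strength[key] += lr * (reward - strength[key])

# The larger material payoff of keeping most is outweighed by the aversive
# signal, so the learned habit favours sharing, with no dedicated moral module.
print(max(strength, key=strength.get))  # -> ('resources_to_divide', 'share_evenly')
```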

Glossary

characteristically deontological : According to Greene, a judgement is characteristically deontological if it is one in ‘favor of characteristically deontological conclusions (eg, “It’s wrong despite the benefits”)’ (Greene, 2007, p. 39). According to Gawronski et al. (2017, p. 365), ‘a given judgment cannot be categorized as deontological without confirming its property of being sensitive to moral norms.’
devaluation : To devalue some food (or video clip, or any other thing) is to reduce its value, for example by allowing the agent to satiate themselves on it or by causing them to associate it with an uncomfortable event such as an electric shock or mild illness.
dual-process theory : Any theory concerning abilities in a particular domain on which those abilities involve two or more processes which are distinct in this sense: the conditions which influence whether one of these processes occurs differ from the conditions which influence whether another occurs.
extinction : In some experiments, there is a phase (usually following instrumental training and an intervention such as devaluation) during which the subject encounters the training scenario exactly as it was (same stimuli, same action possibilities) but the actions produce no relevant outcomes. In this extinction phase, there is no reward (nor punishment). (It is called ‘extinction’ because in many cases not rewarding (or punishing) the actions will eventually extinguish the stimulus–action links.)
Footbridge : A dilemma; also known as Drop. A runaway trolley is about to run over and kill five people. You can hit a switch that will release the bottom of a footbridge and one person will fall onto the track. The trolley will hit this person, slow down, and not hit the five people further down the track. Is it okay to hit the switch?
habitual process : A process underpinning some instrumental actions which obeys Thorndike’s Law of Effect: ‘The presenta­tion of an effective [=rewarding] outcome following an action [...] rein­forces a connection between the stimuli present when the action is per­formed and the action itself so that subsequent presentations of these stimuli elicit the [...] action as a response’ (Dickinson, 1994, p. 48). (Interesting complication which you can safely ignore: there is probably much more to say about under what conditions the stimulus–action connection is strengthened; e.g. Thrailkill, Trask, Vidal, Alcalá, & Bouton, 2018.)
signature limit : A signature limit of a system is a pattern of behaviour the system exhibits which is both defective given what the system is for and peculiar to that system. A signature limit of a model is a set of predictions derivable from the model which are incorrect, and which are not predictions of other models under consideration.
Switch : A dilemma; also known as Trolley. A runaway trolley is about to run over and kill five people. You can hit a switch that will divert the trolley onto a different set of tracks where it will kill only one. Is it okay to hit the switch?
trolley cases : Scenarios designed to elicit puzzling or informative patterns of judgement about how someone should act. Examples include Trolley, Transplant, and Drop. Their use was pioneered by Foot (1967) and Thomson (1976), who aimed to use them to understand ethical considerations around abortion and euthanasia.

References

Bago, B., & De Neys, W. (2019). The Intuitive Greater Good: Testing the Corrective Dual Process Model of Moral Cognition. Journal of Experimental Psychology: General, 148(10), 1782–1801. https://doi.org/10.1037/xge0000533
Carlson, R. W., Bigman, Y. E., Gray, K., Ferguson, M. J., & Crockett, M. J. (2022). How inferred motives shape moral judgements. Nature Reviews Psychology, 1(8), 468–478. https://doi.org/10.1038/s44159-022-00071-x
Chapman, H. A., Kim, D. A., Susskind, J. M., & Anderson, A. K. (2009). In Bad Taste: Evidence for the Oral Origins of Moral Disgust. Science, 323(5918), 1222–1226. https://doi.org/10.1126/science.1165565
Contreras-Huerta, L. S., Coll, M.-P., Bird, G., Yu, H., Prosser, A., Lockwood, P. L., … Apps, M. A. J. (2023). Neural representations of vicarious rewards are linked to interoception and prosocial behaviour. NeuroImage, 269, 119881. https://doi.org/10.1016/j.neuroimage.2023.119881
Conway, P., & Gawronski, B. (2013). Deontological and utilitarian inclinations in moral decision making: A process dissociation approach. Journal of Personality and Social Psychology, 104(2), 216–235. https://doi.org/10.1037/a0031021
Crockett, M. J. (2013). Models of morality. Trends in Cognitive Sciences, 17(8), 363–366. https://doi.org/10.1016/j.tics.2013.06.005
Cushman, F. (2013). Action, Outcome, and Value: A Dual-System Framework for Morality. Personality and Social Psychology Review, 17(3), 273–292. https://doi.org/10.1177/1088868313495594
Dickinson, A. (1994). Instrumental conditioning. In N. Mackintosh (Ed.), Animal learning and cognition. London: Academic Press.
Foot, P. (1967). The problem of abortion and the doctrine of the double effect. Oxford Review, 5, 5–15.
Garcia, J., & Hankins, W. G. (1975). The Evolution of Bitter and the Acquisition of Toxiphobia. In Olfaction and Taste: 5th Symposium (pp. 39–45). Elsevier. https://doi.org/10.1016/B978-0-12-209750-8.50014-7
Gawronski, B., Armstrong, J., Conway, P., Friesdorf, R., & Hütter, M. (2017). Consequences, norms, and generalized inaction in moral dilemmas: The CNI model of moral decision-making. Journal of Personality and Social Psychology, 113(3), 343–376. https://doi.org/10.1037/pspa0000086
Gawronski, B., & Beer, J. S. (2017). What makes moral dilemma judgments “utilitarian” or “deontological”? Social Neuroscience, 12(6), 626–632. https://doi.org/10.1080/17470919.2016.1248787
Gawronski, B., Conway, P., Armstrong, J., Friesdorf, R., & Hütter, M. (2018). Effects of incidental emotions on moral dilemma judgments: An analysis using the CNI model. Emotion, 18(7), 989–1008. https://doi.org/10.1037/emo0000399
Geraci, A., & Surian, L. (2011). The developmental roots of fairness: Infants’ reactions to equal and unequal distributions of resources. Developmental Science, 14(5), 1012–1020. https://doi.org/10.1111/j.1467-7687.2011.01048.x
Greene, J. D. (2007). The Secret Joke of Kant’s Soul. In W. Sinnott-Armstrong (Ed.), Moral Psychology, Vol. 3 (pp. 35–79). MIT Press.
Greene, J. D. (2014). Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics. Ethics, 124(4), 695–726. https://doi.org/10.1086/675875
Greene, J. D. (2015). The cognitive neuroscience of moral judgment and decision making. In The moral brain: A multidisciplinary perspective (pp. 197–220). Cambridge, MA, US: MIT Press.
Haidt, J., & Graham, J. (2007). When Morality Opposes Justice: Conservatives Have Moral Intuitions that Liberals may not Recognize. Social Justice Research, 20(1), 98–116. https://doi.org/10.1007/s11211-007-0034-z
Haidt, J., & Joseph, C. (2004). Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus, 133(4), 55–66. https://doi.org/10.1162/0011526042365555
Ji, G. (1994). Is the bitter rejection response always adaptive? Physiology & Behavior, 56(6). https://doi.org/10.1016/0031-9384(94)90369-7
Kurzban, R., DeScioli, P., & Fein, D. (2012). Hamilton vs. Kant: Pitting adaptations for altruism against adaptations for moral judgment. Evolution and Human Behavior, 33(4), 323–333.
Li, D., & Zhang, J. (2014). Diet Shapes the Evolution of the Vertebrate Bitter Taste Receptor Gene Repertoire. Molecular Biology and Evolution, 31(2), 303–309. https://doi.org/10.1093/molbev/mst219
Nagel, J., & Waldmann, M. R. (2013). Deconfounding distance effects in judgments of moral obligation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39(1), 237.
Rosas, A., & Aguilar-Pardo, D. (2020). Extreme time-pressure reveals utilitarian intuitions in sacrificial dilemmas. Thinking & Reasoning, 26(4), 534–551. https://doi.org/10.1080/13546783.2019.1679665
Schwitzgebel, E., & Cushman, F. (2015). Philosophers’ biased judgments persist despite training, expertise and reflection. Cognition, 141, 127–137. https://doi.org/10.1016/j.cognition.2015.04.015
Surian, L., Ueno, M., Itakura, S., & Meristo, M. (2018). Do Infants Attribute Moral Traits? Fourteen-Month-Olds’ Expectations of Fairness Are Affected by Agents’ Antisocial Actions. Frontiers in Psychology, 9. Retrieved from https://www.frontiersin.org/articles/10.3389/fpsyg.2018.01649
Suter, R. S., & Hertwig, R. (2011). Time and moral judgment. Cognition, 119(3), 454–458. https://doi.org/10.1016/j.cognition.2011.01.018
Thomson, J. J. (1976). Killing, Letting Die, and The Trolley Problem. The Monist, 59(2), 204–217. https://doi.org/10.5840/monist197659224
Thrailkill, E. A., Trask, S., Vidal, P., Alcalá, J. A., & Bouton, M. E. (2018). Stimulus control of actions and habits: A role for reinforcer predictability and attention in the development of habitual behavior. Journal of Experimental Psychology: Animal Learning and Cognition, 44, 370–384. https://doi.org/10.1037/xan0000188
Trémolière, B., & Bonnefon, J.-F. (2014). Efficient Kill–Save Ratios Ease Up the Cognitive Demands on Counterintuitive Moral Utilitarianism. Personality and Social Psychology Bulletin, 124(3), 379–384. https://doi.org/10.1177/0146167214530436
Waldmann, M. R., Nagel, J., & Wiegmann, A. (2012). Moral Judgment. In K. J. Holyoak & R. G. Morrison (Eds.), The oxford handbook of thinking and reasoning (pp. 274–299). Oxford: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199734689.013.0019
Wiegmann, A., Horvath, J., & Meyer, K. (2020). Intuitive expertise and irrelevant options. Oxford Studies in Experimental Philosophy, 3, 275–310.
Wunderlich, K., Dayan, P., & Dolan, R. J. (2012). Mapping value based planning and extensively trained choice in the human brain. Nature Neuroscience, 15(5), 786–791. https://doi.org/10.1038/nn.3068

Endnotes

  1. See Greene (2015, p. 203): ‘We developed this theory in response to a long-standing philosophical puzzle ... Why do people typically say “yes” to hitting the switch, but “no” to pushing?’ ↩︎

  2. See also Trémolière & Bonnefon (2014) and Conway & Gawronski (2013) (who manipulated cognitive load). ↩︎

  3. After this review was published, Nagel & Waldmann (2013) provided substantial evidence that distance may not be a factor influencing moral intuitions after all (the impression that it does was based on confounding distance with factors typically associated with distance such as group membership and efficacy of action). ↩︎

  4. See Chapman et al. (2009), who establish that (1) responses to bitterness are marked by activation of the levator labii muscle ‘which raises the upper lip and wrinkles the nose’; (2) bitter responses are made not just to bitter tastes but also to ‘photographs of uncleanliness and contamination-related disgust stimuli, including feces, injuries, insects, etc.’; and (3) participants who received unfair offers in a dictator game showed ‘objective (facial motor) signs of disgust that were proportional to the degree of unfairness they experienced.’ ↩︎