
Moral Knowledge Does Not Exist

If the slides are not working, or you prefer them full screen, please try this link.

Notes

Consider the following argument:

  1. Ethical intuitions are necessary for ethical knowledge.
  2. Ethical intuitions are too unreliable to be a source of knowledge; therefore:
  3. Ethical knowledge is not possible.

We will assume that premise 1 is true and that the argument is valid. But is premise 2 true?

This section will consider two attempts to defend premise 2.

Attempt 1: Ethical intuitions vary between cultures

Graham, Haidt, & Nosek (2009) show that people who are socially conservative tend to put more emphasis on ethical concerns linked to purity than people who are socially liberal.[1] They are more likely to consider that eating vomit is an ethical violation (rather than simply imprudent or merely harmful).

The difference in ethical intuitions about purity matters. It predicts attitudes about ‘gay marriage, euthanasia, abortion, and pornography [...], stem cell research, environmental attitudes, [...] social distancing in real-world social networks’ and more (Graham et al., 2019).

Could cultural variation provide a defence of the claim that intuitions are too unreliable to be a source of knowledge? Consider:

  1. Between cultures there are inconsistent ethical intuitions; therefore
  2. Ethical intuitions are too unreliable to be a source of knowledge.

Further reading: McGrath (2008) argues that ethical intuitions are controversial and therefore cannot be a basis for knowledge.

Attempt 2: Ethical intuitions are influenced by extraneous factors

In evaluating moral scenarios involving trolley problems (such as Switch and Drop), what factors might sway people’s judgements?

Waldmann, Nagel, & Wiegmann (2012, p. 288) offer a brief summary of some factors which have been considered to influence judgements about trolley problems, including:

  • whether an agent is part of the danger (on the trolley) or a bystander;
  • whether an action involves forceful contact with a victim;
  • whether an action targets an object or the victim;
  • how far the agent is from the victim;[2] and
  • how the victim is described.

They comment:

‘A brief summary of the research of the past years is that it has been shown that almost all these confounding factors influence judgments, along with a number of others [...] it seems hopeless to look for the one and only explanation of moral intuitions in dilemmas. The research suggests that various moral and nonmoral factors interact in the generation of moral judgments about dilemmas’ (Waldmann et al., 2012, pp. 288, 290).

Many extraneous factors also influence trained philosophers’ intuitions ...

Schwitzgebel & Cushman (2015) show that philosophers (and non-philosophers) are subject to order-of-presentation effects: they make different judgements depending on the order in which trolley problems are presented.

Wiegmann, Horvath, & Meyer (2020) show that philosophers are influenced by irrelevant additional options: like non-philosophers, philosophers will more readily endorse killing one person to save nine when given five alternatives than when given six alternatives. (These authors also demonstrate order-of-presentation effects.)

Wiegmann & Horvath (2021) show that philosophers are subject to the ‘Asian disease’ framing used in a famous earlier study (Tversky & Kahneman, 1981).[3] (They also find an indication that philosophers, although susceptible to other framing effects, may be less susceptible than non-philosophers to four other framing effects, including whether an outcome is presented as a loss or a gain.)
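Why does susceptibility to the ‘Asian disease’ framing matter? Because the two frames describe exactly the same prospects (see the Glossary entry). A quick expected-value check, using the standard 600-person version of the problem, makes this explicit:

```latex
% Frame 1 (gains) and Frame 2 (losses) describe the same two options.
% Sure option: 200 saved is the same outcome as 400 (of 600) dead.
% Gamble: 1/3 chance all 600 saved, 2/3 chance none saved.
\begin{align*}
\text{Sure option:}\quad & 200 \text{ saved} \;=\; 600 - 400 \text{ dead}\\
\text{Gamble:}\quad & \tfrac{1}{3}\times 600 + \tfrac{2}{3}\times 0 \;=\; 200 \text{ expected saved}
\end{align*}
```

Every option yields 200 expected survivors under either frame; only the descriptions differ. This is why systematically reversing one’s preference between frames counts as the influence of an extraneous factor.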

Why care about the influence of extraneous factors on ethical intuitions? According to Rini (2013, p. 265):

‘Our moral judgments are apparently sensitive to idiosyncratic factors, which cannot plausibly appear as the basis of an interpersonal normative standard. [...] we are not in a position to introspectively isolate and abstract away from these factors.’[4]

Does the influence of extraneous factors on ethical intuitions reveal that ethical intuitions are too unreliable to be a source of knowledge? Consider:

  1. Ethical intuitions are influenced by extraneous factors; therefore
  2. Ethical intuitions are too unreliable to be a source of knowledge.

Further reading: Rini (2013) considers an argument along these lines.

Conclusion so far

Are ethical intuitions too unreliable to be a source of knowledge? Both attempts to defend this claim appear successful.

Glossary

Asian disease : A disease will kill 600 people for sure without an intervention. You are a decision maker tasked with choosing between two interventions. Your choice can be framed in two ways. Frame 1: Either save 200 people for sure, or else take a one in three chance that everyone will be saved with a two in three chance that no one will be saved. Frame 2: Either allow 400 people to die for sure, or else take a one in three chance that nobody will die and a two in three chance that everyone will die. (Tversky & Kahneman, 1981)
David : ‘David is a great transplant surgeon. Five of his patients need new parts—one needs a heart, the others need, respectively, liver, stomach, spleen, and spinal cord—but all are of the same, relatively rare, blood-type. By chance, David learns of a healthy specimen with that very blood-type. David can take the healthy specimen's parts, killing him, and install them in his patients, saving them. Or he can refrain from taking the healthy specimen's parts, letting his patients die’ (Thomson, 1976, p. 206).
Drop : A dilemma; also known as Footbridge. A runaway trolley is about to run over and kill five people. You can hit a switch that will release the bottom of a footbridge and one person will fall onto the track. The trolley will hit this person, slow down, and not hit the five people further down the track. Is it okay to hit the switch?
Edward : ‘Edward is the driver of a trolley, whose brakes have just failed. On the track ahead of him are five people; the banks are so steep that they will not be able to get off the track in time. The track has a spur leading off to the right, and Edward can turn the trolley onto it. Unfortunately there is one person on the right-hand track. Edward can turn the trolley, killing the one; or he can refrain from turning the trolley, killing the five’ (Thomson, 1976, p. 206).
Switch : A dilemma; also known as Trolley. A runaway trolley is about to run over and kill five people. You can hit a switch that will divert the trolley onto a different set of tracks where it will kill only one. Is it okay to hit the switch?
trolley problem : ‘Why is it that Edward may turn that trolley to save his five, but David may not cut up his healthy specimen to save his five?’ (Thomson, 1976, p. 206).

References

Atari, M., Haidt, J., Graham, J., Koleva, S., Stevens, S. T., & Dehghani, M. (2023). Morality beyond the WEIRD: How the nomological network of morality varies across cultures. Journal of Personality and Social Psychology, 125(5), 1157–1188. https://doi.org/10.1037/pspp0000470
Bengson, J., Cuneo, T., & Shafer-Landau, R. (2020). Trusting Moral Intuitions. Noûs, 54(4), 956–984. https://doi.org/10.1111/nous.12291
Chater, N. (2018). The Mind is Flat: The Illusion of Mental Depth and The Improvised Mind. Penguin UK.
Graham, J., Haidt, J., Motyl, M., Meindl, P., Iskiwitch, C., & Mooijman, M. (2019). Moral Foundations Theory: On the advantages of moral pluralism over moral monism. In K. Gray & J. Graham (Eds.), Atlas of Moral Psychology. New York: Guilford Publications.
Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology, 96(5), 1029–1046. https://doi.org/10.1037/a0015141
Hrdy, S. B. (1979). Infanticide among animals: A review, classification, and examination of the implications for the reproductive strategies of females. Ethology and Sociobiology, 1(1), 13–40. https://doi.org/10.1016/0162-3095(79)90004-9
Hrdy, S. B. (2011). Mothers and others: The evolutionary origins of mutual understanding. Cambridge, Massachusetts: Belknap Press.
Kahneman, D. (2013). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
McDonald, K., Graves, R., Yin, S., Weese, T., & Sinnott-Armstrong, W. (2021). Valence framing effects on moral judgments: A meta-analysis. Cognition, 212, 104703. https://doi.org/10.1016/j.cognition.2021.104703
McGrath, S. (2008). Moral disagreement and moral expertise. Oxford Studies in Metaethics, 3, 87–107.
Nagel, J., & Waldmann, M. R. (2013). Deconfounding distance effects in judgments of moral obligation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39(1), 237.
Rini, R. A. (2013). Making psychology normatively significant. The Journal of Ethics, 17(3), 257–274.
Schwitzgebel, E., & Cushman, F. (2015). Philosophers’ biased judgments persist despite training, expertise and reflection. Cognition, 141, 127–137. https://doi.org/10.1016/j.cognition.2015.04.015
Sinnott-Armstrong, W. (2008). Reply to Tolhurst and Shafer-Landau. In W. Sinnott-Armstrong (Ed.), Moral psychology: Intuition and diversity. The cognitive science of morality (Vol. 2, pp. 97–105). Cambridge: Cambridge University Press.
Thomson, J. J. (1976). Killing, Letting Die, and The Trolley Problem. The Monist, 59(2), 204–217. https://doi.org/10.5840/monist197659224
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453–458. https://doi.org/10.1126/science.7455683
Waldmann, M. R., Nagel, J., & Wiegmann, A. (2012). Moral Judgment. In K. J. Holyoak & R. G. Morrison (Eds.), The Oxford Handbook of Thinking and Reasoning (pp. 274–299). Oxford: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199734689.013.0019
Wiegmann, A., & Horvath, J. (2021). Intuitive Expertise in Moral Judgements. Australasian Journal of Philosophy, 100(2), 342–359. https://doi.org/10.1080/00048402.2021.1890162
Wiegmann, A., Horvath, J., & Meyer, K. (2020). Intuitive expertise and irrelevant options. Oxford Studies in Experimental Philosophy, 3, 275–310.

Endnotes

  1. This study was based on a questionnaire, the MFQ, which does not have the right properties to justify this conclusion. However, new research by Atari et al. (2023), based on an improved questionnaire, does support the conclusion. ↩︎

  2. After this review was published, Nagel & Waldmann (2013) provided substantial evidence that distance may not be a factor influencing moral intuitions after all (the impression that it does was based on confounding distance with factors typically associated with distance such as group membership and efficacy of action). ↩︎

  3. This is one example of a framing effect involving differences in valence. McDonald, Graves, Yin, Weese, & Sinnott-Armstrong (2021) provide a useful meta-analysis of the research on such framing effects, allowing us to be confident that valence framing has a small effect on moral judgements. (They also offer a helpful definition: ‘Valence framing effects occur when participants make different choices or judgments depending on whether the options are described in terms of their positive outcomes (e.g. lives saved) or their negative outcomes (e.g. lives lost).’) ↩︎

  4. Compare Kahneman (2013): ‘there is no underlying [intuition] that is masked or distorted by the frame. [...] our moral intuitions are about descriptions, not about substance’. This comment is linked specifically to Schelling’s example of child exemptions in the tax code and to the Asian disease problem. But the same could be true in other cases too (Chater, 2018).

    Also relevant is Sinnott-Armstrong’s (2008, p. 99) argument:

    ‘Evidence of framing effects makes it reasonable for informed moral believers to assign a large probability of error to moral intuitions in general and then to apply that probability to a particular moral intuition until they have some special reason to believe that the particular moral intuition is in a different class with a smaller probability of error.’

    ↩︎