Moral Knowledge Does Not Exist
Notes
Consider the following argument:
- Ethical intuitions are necessary for ethical knowledge.
- Ethical intuitions are too unreliable to be a source of knowledge; therefore:
- Ethical knowledge is not possible.
We will assume that premise 1 is true and that the argument is valid. But is premise 2 true?
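The claim that the argument is valid can be checked formally. Here is a minimal sketch in Lean; the propositional names are illustrative, not from the text. Premise 1 is read as saying that ethical knowledge requires intuitions reliable enough to serve as a source; premise 2 denies that reliability; the conclusion then follows by modus tollens.

```lean
-- Illustrative propositions (names are my own):
-- `Reliable`  : ethical intuitions are reliable enough to be a source of knowledge
-- `Knowledge` : ethical knowledge is possible
variable (Reliable Knowledge : Prop)

-- Premises 1 and 2 jointly entail the conclusion by modus tollens.
example (p1 : Knowledge → Reliable)   -- premise 1: knowledge requires reliable intuitions
        (p2 : ¬ Reliable)             -- premise 2: intuitions are too unreliable
        : ¬ Knowledge :=
  fun k => p2 (p1 k)
```

Since the form is valid, everything turns on whether premise 2 is true, which is the question the two attempts below address.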
This section will consider two attempts to defend premise 2.
Attempt 1: Ethical intuitions vary between cultures
Graham, Haidt, & Nosek (2009) show that people who are socially conservative tend to put more emphasis on ethical concerns linked to purity than people who are socially liberal.[1] They are more likely to consider that eating vomit is an ethical violation (rather than simply imprudent or merely harmful).
The difference in ethical intuitions about purity matters. It predicts attitudes about ‘gay marriage, euthanasia, abortion, and pornography [...], stem cell research, environmental attitudes, [...] social distancing in real-world social networks’ and more (Graham et al., 2019).
Could cultural variation provide a defence of the claim that intuitions are too unreliable to be a source of knowledge? Consider:
- Between cultures there are inconsistent ethical intuitions; therefore
- Ethical intuitions are too unreliable to be a source of knowledge.
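As stated, this moves from a single premise to the conclusion. Making the inference formally explicit requires a linking premise connecting cross-cultural inconsistency to unreliability, as in this Lean sketch (propositional names are illustrative, not from the text):

```lean
-- Illustrative propositions (names are my own):
-- `Inconsistent` : ethical intuitions are inconsistent between cultures
-- `Unreliable`   : ethical intuitions are too unreliable to be a source of knowledge
variable (Inconsistent Unreliable : Prop)

-- With the linking premise made explicit, the inference is valid:
example (p1 : Inconsistent)
        (link : Inconsistent → Unreliable)  -- bridging premise, left unstated above
        : Unreliable :=
  link p1
```

The defence therefore depends on whether the bridging premise can be sustained.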
Further reading: McGrath (2008) argues that ethical intuitions are controversial and therefore cannot be a basis for knowledge.
Attempt 2: Ethical intuitions are influenced by extraneous factors
In evaluating moral scenarios involving trolley problems (such as Switch and Drop), what factors might sway people’s judgements?
Waldmann, Nagel, & Wiegmann (2012, p. 288) offer a brief summary of some factors that have been considered to influence judgements about trolley problems, including:
- whether an agent is part of the danger (on the trolley) or a bystander;
- whether an action involves forceful contact with a victim;
- whether an action targets an object or the victim;
- how far the agent is from the victim;[2] and
- how the victim is described.
They comment:
‘A brief summary of the research of the past years is that it has been shown that almost all these confounding factors influence judgments, along with a number of others [...] it seems hopeless to look for the one and only explanation of moral intuitions in dilemmas. The research suggests that various moral and nonmoral factors interact in the generation of moral judgments about dilemmas’ (Waldmann et al., 2012, pp. 288, 290).
Many extraneous factors also influence trained philosophers’ intuitions ...
Schwitzgebel & Cushman (2015) show that philosophers (and non-philosophers) are subject to order-of-presentation effects (they make different judgements depending on which order trolley problems are presented).
Wiegmann, Horvath, & Meyer (2020) show that philosophers are subject to irrelevant additional options: like non-philosophers, philosophers will more readily endorse killing one person to save nine when given five alternatives than when given six alternatives. (These authors also demonstrate order-of-presentation effects.)
Wiegmann & Horvath (2021) show that philosophers are subject to the ‘Asian disease’ framing used in a famous earlier study (Tversky & Kahneman, 1981).[3] (They also find an indication that philosophers, although susceptible to other framing effects, may be less susceptible than non-philosophers to four other framing effects, including whether an outcome is presented as a loss or a gain.)
Why care about the influence of extraneous factors on ethical intuitions? According to Rini (2013, p. 265):
‘Our moral judgments are apparently sensitive to idiosyncratic factors, which cannot plausibly appear as the basis of an interpersonal normative standard. [...] we are not in a position to introspectively isolate and abstract away from these factors.’[4]
Does the influence of extraneous factors on ethical intuitions reveal that ethical intuitions are too unreliable to be a source of knowledge? Consider:
- Ethical intuitions are influenced by extraneous factors; therefore
- Ethical intuitions are too unreliable to be a source of knowledge.
Further reading: Rini (2013) considers an argument along these lines.
Conclusion so far
Are ethical intuitions too unreliable to be a source of knowledge? Both attempts to defend this claim appear successful.
Glossary
References
Endnotes
This study was based on a questionnaire, the Moral Foundations Questionnaire (MFQ), which does not have the right properties to justify this conclusion. However, new research by Atari et al. (2023), based on an improved questionnaire, does support the conclusion. ↩︎
After this review was published, Nagel & Waldmann (2013) provided substantial evidence that distance may not be a factor influencing moral intuitions after all (the impression that it does was based on confounding distance with factors typically associated with distance such as group membership and efficacy of action). ↩︎
This is one example of a framing effect involving differences in valence. McDonald, Graves, Yin, Weese, & Sinnott-Armstrong (2021) provide a useful meta-analysis of the research on such framing effects, allowing us to be confident that valence framing has a small effect on moral judgements. (They also offer a helpful definition: ‘Valence framing effects occur when participants make different choices or judgments depending on whether the options are described in terms of their positive outcomes (e.g. lives saved) or their negative outcomes (e.g. lives lost).’) ↩︎
Compare Kahneman (2013): ‘there is no underlying [intuition] that is masked or distorted by the frame. [...] our moral intuitions are about descriptions, not about substance’. This comment is linked specifically to Schelling’s example of child exemptions in the tax code and to the Asian disease problem. But the same could be true in other cases too (Chater, 2018).
Also relevant is Sinnott-Armstrong’s (2008, p. 99) argument:
‘Evidence of framing effects makes it reasonable for informed moral believers to assign a large probability of error to moral intuitions in general and then to apply that probability to a particular moral intuition until they have some special reason to believe that the particular moral intuition is in a different class with a smaller probability of error.’ ↩︎