
But One We Can Work Around

If the slides are not working, or you prefer them full screen, please try this link.


We can divide the problem of identifying models of minds and actions into two parts: first, a characterisation of mental states generally; and, second, a characterisation of what distinguishes different attitudes like knowledge, intention, surprise and the rest.

It turns out that the two parts of the problem are to a significant degree independent of each other.

Part I: Mental States (Perner’s Strategy)

Perner starts with a theory of mental states.

‘representation involves a representational medium that stands in a representing relation to its representational content.’ (Perner, 1991, p. 40)

Mental states are understood as a relation to a thing. But there are two distinct ways of understanding mental states corresponding to two different kinds of thing they can be understood as relations to.

Option 1: The thing can be a situation, that is an aspect of the world.

Option 2: The thing can be a representation of a situation.

Option 1 is simpler but also more limited. For on Option 1, there is no way to understand the possibility of misrepresentation, that is, a mental state which is supposed ‘to describe the real situation (referent) and yet (mis)describes it as a quite different situation (sense)’ (Perner, 1991, p. 92).

So why bother with Option 2 at all? Actually Perner’s view is that in everyday mindreading we rarely do bother with Option 2.[1] But there are some limits on Option 1. In particular, understanding actions based on false beliefs requires Option 2.[2]

Perner’s Paradox

The following four claims cannot all be true:

  1. Ancient philosophers were deeply puzzled about the possibility of speaking and thinking falsely.

  2. Ancient philosophers could have passed false belief tasks.

  3. To pass a false belief task is to understand a case of misrepresentation.

  4. ‘Explicit understanding of representation (mentally modeling the representational relationship = metarepresentation) [...] is necessary for understanding cases of misrepresentation.’

This motivates considering alternatives to Perner’s theory. In particular, what would happen if we rejected either (3) or (4)?

Davidson’s Measurement-Theoretic Alternative

According to Davidson:

‘Beliefs are true or false, but they represent nothing.’ (Davidson, 2001, p. 46)[3]

On Davidson’s view, the sentences (or, better, utterances) we use to distinguish between different things someone might intend, know or believe function a bit like the numbers we use to distinguish temperatures.

Just as the numbers play no physical role, so the sentences play no psychological role. Nor do the numbers or the sentences have counterparts that play such a role.

This is a measurement-theoretic, non-representational theory of the nature of mental states (Matthews, 1994; Matthews, 2007 develops the idea in detail).

How Do Mindreaders Model Mental States?

In philosophy, the focus is sometimes on how mental states actually are. That is not our concern.

We are concerned with how mental states are modeled in mindreading. Perner’s (Fodor-esque) proposal provides one option, Davidson’s proposal provides an alternative option. Each option can be used to generate a hypothesis about a particular mindreading ability. Because the hypotheses generate different predictions, they are testable.

It is possible that both models are used by mindreaders at different times. Perhaps different mindreading abilities involve different models.

Part II: Attitudes

Decision theory provides a way of characterising instrumental action as a consequence of two attitudes, subjective probabilities and preferences (Jeffrey, 1983).

We also know from the history of decision theory that it is possible to construct models that are less sophisticated. For example, there is a model which uses objective rather than subjective preferences (that is, there is just one preference ranking that applies in all cases regardless of which subject is the agent of the action).
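The decision-theoretic model just described can be sketched as a small computation. This is only an illustrative sketch: the agent, actions, outcomes, probabilities and preference values below are invented for the example, not taken from the talk or from Jeffrey (1983).

```python
def expected_utility(action, probabilities, preferences):
    """Expected utility of an action: sum over outcomes of
    P(outcome given action) * the preference value of that outcome."""
    return sum(p * preferences[outcome]
               for outcome, p in probabilities[action].items())

def predict_action(actions, probabilities, preferences):
    """The model predicts the action that maximises expected utility."""
    return max(actions, key=lambda a: expected_utility(a, probabilities, preferences))

# Sophisticated model: the mindreader attributes *subjective* probabilities
# and preferences, which can vary from agent to agent (and diverge from the facts).
ayesha_probabilities = {
    "take umbrella": {"stay dry": 1.0},
    "leave umbrella": {"stay dry": 0.4, "get wet": 0.6},
}
ayesha_preferences = {"stay dry": 1.0, "get wet": -2.0}

# A less sophisticated model would replace ayesha_preferences with a single
# preference ranking applied to every agent, regardless of who is acting.
print(predict_action(list(ayesha_probabilities), ayesha_probabilities,
                     ayesha_preferences))
# → take umbrella
```

With these invented numbers, leaving the umbrella has expected utility 0.4 × 1.0 + 0.6 × (−2.0) = −0.8, so the model predicts taking it.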

It is possible to map some of the tasks from the Theory of Mind Scale (Wellman & Liu, 2004) on to these more and less sophisticated models. This enables us to use decision-theoretic notions to characterise which models are involved in mindreading.

The advantage is that we do have a shared understanding of subjective probabilities and preferences. After all, these are characterised by the theory. The limit is that few aspects of mindreading can be characterised in this way. These limits are quickly reached even within the Theory of Mind Scale (Wellman & Liu, 2004): there is no way to capture what ‘Knowledge-Ignorance’ is measuring, for instance.

Other features that we would like a theory of mindreading to incorporate are also missing from decision theory. For example, we would like to know to what extent mindreaders are sensitive to the distinction between strength of justification and strength of confidence. Or how mindreaders model situations involving temporal constraints among actions, as when future action possibilities depend on how an agent acts now.

How could we overcome this limit? Useful formal models are probably too much to hope for. Attempts to model notions of knowledge that are relevant to predicting or explaining action face formidable problems (see, for example, Stalnaker, 1999, Chapters 13–14, on the problem of logical omniscience).

Instead we can characterise aspects of mindreading by identifying limits of the decision theoretic model. In the talk, this is illustrated by situations in which adopting shorter or longer temporal intervals in framing the available actions influences which action will be performed (or which action we would predict if deriving predictions using a decision-theoretic model of minds and actions). This limit of decision theory corresponds to one aspect of mindreading competence that sometimes is associated with the word ‘intention’ (for example, by Bratman, 1987).
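The temporal-framing point can be made concrete with a toy computation. Everything here is hypothetical (the actions and per-step payoffs are invented, not from the talk): it shows only that which action maximises utility can flip depending on how long a temporal interval is used to frame the available actions.

```python
# Invented per-step payoffs for two actions. Framed over one step,
# the immediate reward dominates; framed over four steps, the later
# consequences reverse the ranking.
payoff_per_step = {
    "snack now":  [3, -1, -1, -1],  # immediate reward, later costs
    "skip snack": [0,  1,  1,  1],  # no immediate reward, later benefits
}

def total_utility(action, horizon):
    """Utility of an action summed over the first `horizon` time steps."""
    return sum(payoff_per_step[action][:horizon])

def predicted_action(horizon):
    """The action a decision-theoretic model predicts under this framing."""
    return max(payoff_per_step, key=lambda a: total_utility(a, horizon))

print(predicted_action(1))  # → snack now   (short framing)
print(predicted_action(4))  # → skip snack  (long framing)
```

Since both predictions follow from the same payoffs, the model itself does not settle which framing is correct; that is the limit the talk associates with the word ‘intention’.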


It is possible to characterise even sophisticated forms of mindreading without assuming what we do not have, namely a shared understanding of notions like knowledge, intention, surprise, anger and the rest.

As researchers we do not need a shared understanding of these notions. There are better alternatives to casting theories about mindreading in terms like ‘knowledge’, ‘intention’ or ‘surprise’.

No research succeeds by unreflectively using the language of the targets of explanation in characterising physical cognition, colour cognition, or any other cognitive domain. Except mindreading. But that is something that we could change.


Bandura, A. (2002). Selective Moral Disengagement in the Exercise of Moral Agency. Journal of Moral Education, 31(2), 101–119.
Beaudoin, C., Leblanc, É., Gagner, C., & Beauchamp, M. H. (2020). Systematic Review and Inventory of Theory of Mind Measures for Young Children. Frontiers in Psychology, 10.
Bratman, M. E. (1987). Intentions, plans, and practical reasoning. Cambridge, MA: Harvard University Press.
Davidson, D. (2001). Subjective, intersubjective, objective. Oxford: Clarendon Press.
Doherty, M. J., & Perner, J. (1998). Metalinguistic awareness and theory of mind: Just two words for the same thing? Cognitive Development, 13, 279–305.
Gendler, T. S. (2008). Alief in Action (and Reaction). Mind & Language, 23(5), 552–585.
Geurts, B. (2021). First saying, then believing: The pragmatic roots of folk psychology. Mind & Language, 36(4), 515–532.
Jeffrey, R. C. (1983). The logic of decision, second edition. Chicago: University of Chicago Press.
Leekam, S., Perner, J., Healey, L., & Sewell, C. (2008). False signs and the non-specificity of theory of mind: Evidence that preschoolers have general difficulties in understanding representations. British Journal of Developmental Psychology, 26(4), 485–497.
Matthews, R. J. (2007). The measure of mind: Propositional attitudes and their attribution. Oxford: Oxford University Press.
Matthews, R. J. (1994). The measure of mind. Mind, 103(410), 131–146.
Okasha, S. (2011). Optimal Choice in the Face of Risk: Decision Theory Meets Evolution. Philosophy of Science, 78(1), 83–104.
Perner, J. (1991). Understanding the representational mind. Cambridge, Mass.: MIT press.
Perner, J., & Leekam, S. (2008). The Curious Incident of the Photo that was Accused of Being False: Issues of Domain Specificity in Development, Autism, and Brain Imaging. Quarterly Journal of Experimental Psychology, 61(1), 76–89.
Priewasser, B., Roessler, J., & Perner, J. (2013). Competition as rational action: Why young children cannot appreciate competitive games. Journal of Experimental Child Psychology, 116(2).
Rakoczy, H., Warneken, F., & Tomasello, M. (2007). “This way!”, “No! That way!”—3-year olds know that two people can have mutually incompatible desires. Cognitive Development, 22(1), 47–68.
Sabbagh, M. (2006). Executive functioning and preschoolers’ understanding of false beliefs, false photographs, and false signs. Child Development, 77(4), 1034–1049.
Stalnaker, R. (1999). Context and content: Essays on intentionality in speech and thought. Oxford: Oxford University Press.
Wellman, H., & Liu, D. (2004). Scaling of theory-of-mind tasks. Child Development, 75(2), 523–541.


  1. See Perner (1991, p. 120): ‘our common sense is capable of taking a representational view of the mind but that, unless really necessary, it tries to get by without it.’ ↩︎

  2. Perner (1991, p. 178): ‘with the ability to interpret certain thinking activities as mental representation the child gains new insight into aspects of mental functioning that are nearly impossible to comprehend without a representational theory. One such case is mistaken action, that is, action based on a misconception of the world or false belief.’ ↩︎

  3. See also Davidson (2001, p. 184): ‘we ought also to question the popular assumption that sentences, or their spoken tokens, or sentence-like entities, or configurations in our brains can properly be called 'representations', since there is nothing for them to represent.’ ↩︎