But One We Can Work Around
If the slides are not working, or you prefer them full screen, please try this link.
Notes
We can divide the problem of identifying models of minds and actions into two: first, a characterisation of mental states generally; and, second, a characterisation of what distinguishes different attitudes like knowledge, intention, surprise and the rest.
It turns out that the two parts of the problem are to a significant degree independent of each other.
Part I: Mental States (Perner’s Strategy)
Perner starts with a theory of mental states.
‘representation involves a representational medium that stands in a representing relation to its representational content.’ (Perner, 1991, p. 40)
Mental states are understood as relations to things. But there are two distinct ways of understanding mental states, corresponding to two different kinds of thing they can be understood as relations to.
Option 1: The thing can be a situation, that is an aspect of the world.
Option 2: The thing can be a representation of a situation.
Option 1 is simpler but also more limited. For on Option 1, there is no way to understand the possibility of misrepresentation, that is, a mental state which is supposed ‘to describe the real situation (referent) and yet (mis)describes it as a quite different situation (sense)’ (Perner, 1991, p. 92).
So why bother with Option 2 at all? Actually Perner’s view is that in everyday mindreading we rarely do bother with Option 2.[1] But there are some limits on Option 1. In particular, understanding actions based on false beliefs requires Option 2.[2]
Perner’s Paradox
The following four claims cannot all be true:
1. Ancient philosophers were deeply puzzled about the possibility of speaking and thinking falsely.
2. Ancient philosophers could have passed false belief tasks.
3. To pass a false belief task is to understand a case of misrepresentation.
4. ‘Explicit understanding of representation (mentally modeling the representational relationship = metarepresentation) [...] is necessary for understanding cases of misrepresentation.’
This motivates considering alternatives to Perner’s theory. In particular, what would happen if we rejected either (3) or (4)?
Davidson’s Measurement-Theoretic Alternative
According to Davidson:
‘Beliefs are true or false, but they represent nothing.’ (Davidson, 2001, p. 46)[3]
On Davidson’s view, the sentences (or, better, utterances) we use to distinguish between different things someone might intend, know or believe function a bit like the numbers we use to distinguish temperatures.
Just as numbers play no physical role, so the sentences play no psychological role. Nor do either the numbers or the sentences have counterparts that play a psychological role.
This is a measurement-theoretic, non-representational theory of the nature of mental states (Matthews, 1994; Matthews, 2007 develops the idea in detail).
How Do Mindreaders Model Mental States?
In philosophy, the focus is sometimes on how mental states actually are. That is not our concern.
We are concerned with how mental states are modeled in mindreading. Perner’s (Fodor-esque) proposal provides one option, Davidson’s proposal provides an alternative option. Each option can be used to generate a hypothesis about a particular mindreading ability. Because the hypotheses generate different predictions, they are testable.
It is possible that both models are used by mindreaders at different times. Perhaps different mindreading abilities involve different models.
Part II: Attitudes
Decision theory provides a way of characterising instrumental action as a consequence of two attitudes, subjective probabilities and preferences (Jeffrey, 1983).
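As a rough illustration (not part of the talk; all names and numbers here are invented), the decision-theoretic model can be sketched as a prediction rule: the agent performs whichever available action maximises expected utility, computed from the agent's subjective probabilities and preferences.

```python
# Minimal sketch of decision-theoretic action prediction.
# The agent is predicted to perform the action that maximises
# expected utility given her subjective probabilities and preferences.

def expected_utility(action, probabilities, utilities):
    """Sum over outcomes of P(outcome | action) * utility(outcome)."""
    return sum(probabilities[action][outcome] * utilities[outcome]
               for outcome in probabilities[action])

def predict_action(actions, probabilities, utilities):
    """Predict the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, probabilities, utilities))

# Toy case: will the agent take an umbrella? Probabilities are
# *subjective* -- this agent thinks rain is likely.
probabilities = {
    'take_umbrella':  {'dry': 1.0, 'wet': 0.0},
    'leave_umbrella': {'dry': 0.4, 'wet': 0.6},
}
utilities = {'dry': 10, 'wet': 0}

predict_action(['take_umbrella', 'leave_umbrella'], probabilities, utilities)
# 'take_umbrella'
```

The point of the sketch is only that, on this model, attributing two attitudes (probabilities and preferences) suffices to derive a prediction about instrumental action.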
We also know from the history of decision theory that it is possible to construct models that are less sophisticated. For example, there is a model which uses objective rather than subjective preferences (that is, there is just one preference ranking that applies in all cases regardless of which subject is the agent of the action).
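The contrast between the less and more sophisticated models can be made concrete with a toy sketch (invented names and rankings; nothing here is from the talk): the objective model uses one preference ranking for every agent, whereas the subjective model indexes preferences to agents.

```python
# Less sophisticated model: a single, objective preference ranking
# applies to every agent. More sophisticated model: each agent has
# her own subjective ranking.

objective_ranking = ['food', 'toy', 'nothing']  # one ranking for everyone

subjective_rankings = {
    'sally': ['toy', 'food', 'nothing'],
    'anne':  ['food', 'toy', 'nothing'],
}

def preferred(options, agent=None):
    """Return the most preferred available option.

    With no agent given, use the objective ranking (the simpler model);
    with an agent given, use that agent's own ranking.
    """
    ranking = subjective_rankings[agent] if agent else objective_ranking
    return min(options, key=ranking.index)

preferred({'toy', 'food'})                 # 'food' on the objective model
preferred({'toy', 'food'}, agent='sally')  # 'toy' on the subjective model
```

The two models diverge exactly when an agent's own ranking departs from the shared one, which is why tasks probing such divergences can discriminate between them.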
It is possible to map some of the tasks from the Theory of Mind Scale (Wellman & Liu, 2004) on to these more and less sophisticated models. This enables us to use decision-theoretic notions to characterise which models are involved in mindreading.
The advantage is that we do have a shared understanding of subjective probabilities and preferences. After all, these are characterised by the theory. The limit is that few aspects of mindreading can be characterised in this way. These limits are quickly reached even within the Theory of Mind Scale (Wellman & Liu, 2004): there is no way to capture what ‘Knowledge-Ignorance’ is measuring, for instance.
Other features that we would like a theory of mindreading to incorporate are also missing from decision theory. For example, we would like to know to what extent mindreaders are sensitive to the distinction between strength of justification and strength of confidence. Or how mindreaders model situations involving temporal constraints among actions, as when future action possibilities depend on how an agent acts now.
How could we overcome this limit? Useful formal models are probably too much to hope for. Attempts to model notions of knowledge that are relevant to predicting or explaining action face formidable problems (see, for example, Stalnaker, 1999, Chapters 13–14 on the problem of logical omniscience).
Instead we can characterise aspects of mindreading by identifying limits of the decision theoretic model. In the talk, this is illustrated by situations in which adopting shorter or longer temporal intervals in framing the available actions influences which action will be performed (or which action we would predict if deriving predictions using a decision-theoretic model of minds and actions). This limit of decision theory corresponds to one aspect of mindreading competence that sometimes is associated with the word ‘intention’ (for example, by Bratman, 1987).
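A toy sketch of this framing effect (invented scenario and numbers, offered only as an illustration): the same maximising rule predicts different actions depending on whether the available actions are framed over a shorter or a longer temporal interval.

```python
# The same decision-theoretic rule yields different predictions
# depending on the temporal interval over which actions are framed.

# Utilities of each action under two framings of the situation.
utilities_by_frame = {
    'short': {'snack_now': 5, 'wait_for_meal': 0},   # only the next few minutes count
    'long':  {'snack_now': 2, 'wait_for_meal': 8},   # the whole evening counts
}

def predicted_action(frame):
    """Predict the utility-maximising action under the given framing."""
    utilities = utilities_by_frame[frame]
    return max(utilities, key=utilities.get)

predicted_action('short')  # 'snack_now'
predicted_action('long')   # 'wait_for_meal'
```

Because decision theory itself is silent on which framing to adopt, the choice of interval is an extra ingredient; that gap corresponds to the aspect of mindreading competence associated with ‘intention’.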
Conclusion
It is possible to characterise even sophisticated forms of mindreading without assuming what we do not have, namely a shared understanding of notions like knowledge, intention, surprise, anger and the rest.
As researchers we do not need a shared understanding of these notions. There are better alternatives to casting theories about mindreading in terms like ‘knowledge’, ‘intention’ or ‘surprise’.
No research succeeds by unreflectively using the language of the targets of explanation in characterising physical cognition, colour cognition, or any other cognitive domain. Except mindreading. But that is something that we could change.
Endnotes
See Perner (1991, p. 120): ‘our common sense is capable of taking a representational view of the mind but that, unless really necessary, it tries to get by without it.’ ↩︎
Perner (1991, p. 178): ‘with the ability to interpret certain thinking activities as mental representation the child gains new insight into aspects of mental functioning that are nearly impossible to comprehend without a representational theory. One such case is mistaken action, that is, action based on a misconception of the world or false belief.’ ↩︎
See also Davidson (2001, p. 184): ‘we ought also to question the popular assumption that sentences, or their spoken tokens, or sentence-like entities, or configurations in our brains can properly be called 'representations', since there is nothing for them to represent.’ ↩︎