We Lack a Shared Understanding
The overall question for this talk is,
I aim, first, to show that this question is a problem. That is the aim of this section. (The following sections are about why it matters and how to work around the problem.)
Here is a partial answer that I think almost all researchers would agree on (though it would further my aims were there substantial disagreement):
They are models which involve intentional actions and mental states like belief, knowledge, desire, intention, anger and joy.
Of course, this is only a partial answer. Accepting it means that we need to say, further, what these states are. So we should ask,
What anchors our understanding, as researchers, of intentional action, belief, knowledge and the rest?
Here I think there are three main options, none of them adequate. One is to invoke our own everyday expertise as mindreaders. Another is to invoke philosophers’ attempts to characterise these mental states. And the third is to rely on attempts to operationalize mindreading.
In this section, I am going to explore these options with the aim of showing that none provides the basis for a shared understanding of what we’re talking about when, as researchers, we are talking about knowledge, desire, intention, anger, joy and the rest.
In fact we lack any such shared understanding.
Option 1: The Researcher’s Personal Expertise
As well as being researchers, you and I also live ordinary lives and in these ordinary lives we have gained much expertise as mindreaders. Could this expertise be what anchors our understanding, as researchers, of belief, knowledge and the rest?
This question almost answers itself. The problem is not simply that our expertise may differ in important ways, perhaps because we are at different points on the autistic spectrum or perhaps because of cultural differences between us (see, for example, Dixson, Komugabe-Dixson, Dixson, & Low, 2018). This is a problem, of course. But there is a deeper problem.
This everyday expertise we both have does not enable us to know what terms like ‘knowledge’ and ‘belief’ pick out. These words may not pick any one thing out—or there may be nothing at all that they pick out (compare Fiske, 2020 on emotion: this would be an instance of what he calls the lexical fallacy).
It’s possible to be blind to this problem because of a temptation to suppose that the workings of your own mind and the reasons for your own actions are somehow transparent to you.
Myths about Folk Psychology
Consider Lewis (1972). He postulates a set of platitudes concerning mental states which are common knowledge among us all. He also claims that if we assembled these platitudes, we could use them to define mental state terms like ‘intention’ and ‘knowledge’.
If this were true it would mean that we can, after all, rely on our everyday expertise as mindreaders to anchor our understanding, as researchers, of knowledge, intention and the rest. But is it true?
To illustrate how his view works, Lewis imagines that some important platitudes have this form:
‘When someone is in so-and-so combination of mental states and receives sensory stimuli of so-and-so kind, he tends with so-and-so probability to be caused thereby to go into so-and-so mental states and produce so-and-so motor responses.’ (Lewis, 1972, p. 256)
But what are these platitudes that are supposed to be common knowledge? Heider (1958, p. 12) offered what is probably still, more than half a century later, the most sustained, carefully developed attempt to ‘make explicit the system of concepts that underlies interpersonal behavior’. There isn’t much in Heider’s work that looks useful for defining ‘intention’ or ‘knowledge’.
It is also striking that not very much of Heider’s construction could plausibly be regarded as common knowledge among ordinary mindreaders. Heider relies on a mix of informal observation, imagination and guesswork, as well as philosophers’ ideas (Ryle and Sartre, for example). My guess is that we should regard the principles he identifies not as articulating an understanding that we all share but rather as an imaginative take on possible strategies for everyday mindreading. In fact, Heider’s approach is not so different from that of philosophers like Bratman or Alvarez.
But if Lewis were right about common knowledge of platitudes anchoring mental state terms, either Heider’s work should have turned out very differently or else there should be a lot less disagreement among the philosophers. This is why I think Lewis must be wrong.
We might be able to use theories to specify models that help us characterise the expertise of ordinary mindreaders. But we are not in a position to identify those theories simply by virtue of possessing such expertise ourselves.
Comparison with Naive Physics
You can see that relying on each researcher’s individual everyday expertise would be a nonstarter by comparison with the case of naive physics. Successful attempts to characterise folk physics do not rely on researchers’ pre-theoretical understanding of notions like force and motion. Instead they anchor these terms by invoking fragments of physicists’ theories.
Since we as ordinary folk do not have much in the way of common knowledge of detailed psychological theories about belief, knowledge, desire, intention and the rest, it is perhaps natural to rely on philosophers instead.
Option 2: Rely on Philosophical Accounts
What anchors our understanding, as researchers, of action, belief, knowledge and the rest? Could it be philosophical accounts of these mental states?
At first glance this may seem like a mad suggestion just because there is so much apparent disagreement among philosophers.
Take intention, for example. It is not just that philosophers disagree on whether intentions are beliefs about the future (Velleman, 1989), or belief-desire pairs (Sinhababu, 2013), or something entirely distinct from both beliefs and desires (Bratman, 1987). Nor is it just that some think of intentions as essentially components of plans (Bratman, 1987 again) whereas others do not connect intentions with plans at all (Searle, 1983). Nor is it even that there is much disagreement about how intentions relate to intentional action, to knowledge and to belief. Philosophers even disagree on whether intentions are mental states at all.
There is similar radical disagreement concerning knowledge, and concerning emotions.
So yes, it would be understandable to despair of using philosophical accounts to anchor understanding just because there is such deep and widespread disagreement among the philosophers.
But there is another, deeper reason for thinking that we cannot use philosophical accounts to anchor our understanding, as researchers, of knowledge, intention and the rest.
Philosophers have different, mostly unarticulated aims. Some philosophers seem to be proposing new ways of thinking in the hope that we adopt them. Others appear to be attempting to make explicit principles that are implicit in a particular tradition of law or in the activities of a particular historical culture. And of course some are trying to systematise claims that seem so obviously true that we accept them without having any reason to do so (e.g. Lewis, 1969).
Further, in trusting philosophers, you do not avoid relying on individual researchers’ personal expertise. Or so Nagel argues:
‘Unless there is a special reason to think that knowledge attributions work quite differently when we are reading philosophy papers—and [there is] evidence against that sort of exceptionalism—we should expect to find that epistemic case intuitions [which are among the things that inform philosophers’ views about what knowledge is] are generated by the natural capacity responsible for our everyday attributions of states of knowledge, belief and desire. This capacity has been given various labels, including ‘folk psychology’, ‘mindreading’, and ‘theory of mind’’ (Nagel, 2012, p. 510).
To be clear, let me distinguish two claims:
We could (mis)use philosophical accounts of minds and actions to characterise various models of mind.
Philosophical accounts of minds and actions anchor a shared understanding of what knowledge, belief, joy and the rest are.
I am rejecting the second claim only. (The first claim has been very good to me, and I hope to keep misusing philosophical accounts of minds and actions.)
Option 3: Rely on the Operationalization
You might object that it doesn’t matter how we characterise the models of instrumental action and mental states involved in mindreading because we already have a solidly operationalized construct, Theory of Mind.
This would be a welcome objection if true. I very much favour working back from a solid operationalization to an understanding of the things operationalized. In fact I will suggest that we can do this to some extent.
But it is important to recognise that we currently have only very limited understanding of how to operationalise mindreading.
I say this for two reasons. First, we do not really know much about the structure of Theory of Mind, and different researchers use different taxonomies (Happé, Cook, & Bird, 2017; Beaudoin, Leblanc, Gagner, & Beauchamp, 2020). Second, while there is some evidence that a wide range of false belief tasks test for a single underlying competence (Flynn, 2006, p. 650; Wellman, Cross, & Watson, 2001), theory of mind tasks more broadly appear to test for different things, in the sense that an exploratory factor analysis fails to find that they load on a single factor (Warnell & Redcay, 2019).
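To make the factor-analytic point concrete, here is a minimal sketch of what ‘loading on a single factor’ amounts to. The data are simulated and the task structure is invented for illustration; nothing here comes from the studies cited, and the eigenvalue check is only a rough proxy for a proper exploratory factor analysis.

```python
# Hypothetical illustration: do scores on several 'theory of mind tasks'
# look as if one underlying competence drives them all? Simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 300  # simulated participants

# Case 1: one latent competence driving all four tasks (single-factor case)
latent = rng.normal(size=n)
tasks_single = np.column_stack(
    [latent + rng.normal(scale=0.5, size=n) for _ in range(4)]
)

# Case 2: two unrelated competences, each driving two tasks (multi-factor case)
a, b = rng.normal(size=n), rng.normal(size=n)
tasks_multi = np.column_stack([
    a + rng.normal(scale=0.5, size=n),
    a + rng.normal(scale=0.5, size=n),
    b + rng.normal(scale=0.5, size=n),
    b + rng.normal(scale=0.5, size=n),
])

def first_factor_share(scores):
    """Proportion of variance carried by the largest eigenvalue of the
    correlation matrix -- a crude stand-in for a one-factor structure."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(scores, rowvar=False))
    return eigvals.max() / eigvals.sum()

print(round(first_factor_share(tasks_single), 2))  # high: one axis dominates
print(round(first_factor_share(tasks_multi), 2))   # lower: variance is split
```

If the tasks behaved like the first case, pointing to the operationalisation would settle what is being measured; Warnell & Redcay’s finding suggests the second case is closer to the truth for theory of mind tasks taken together.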
This means that, when faced with the question of what anchors our understanding, as researchers, of action, belief, knowledge and the rest, it is not enough simply to point to an operationalisation. We need more.
This does not mean, of course, that operationalisations are irrelevant. Quite the opposite. Later I will suggest that both false belief tasks (Wellman et al., 2001; Flynn, 2006) and Wellman & Liu (2004)’s theory of mind scale are useful starting points.
You may encounter variations on this definition of ‘instrumental’ in the literature. For instance, Dickinson (2016, p. 177) characterises instrumental actions differently: in place of the teleological ‘in order to bring about an outcome’, he stipulates that an instrumental action is one that is ‘controlled by the contingency between’ the action and an outcome. And de Wit & Dickinson (2009, p. 464) stipulate that ‘instrumental actions are learned’.
I am guilty of explicitly adopting this second option. ↩︎
Heider did not share Lewis’ assumption about being able to rely on common knowledge of platitudes alone. On Heider’s view, ‘If people were asked about these conditions they probably would not be able to make a complete list of them. Nevertheless, these assumptions are a necessary part of interpersonal relations; if we probe the events of everyday behavior, they can be brought to the surface and formulated in more precise terms’ (Heider, 1958, p. 60). ↩︎
See Beaudoin et al. (2020, p. 15): ‘The lack of theoretical structure and shared taxonomy in ToM definitions and its underlying composition impedes our ability to fully integrate ToM in a coherent and comprehensive framework linking it to various socio-cognitive abilities, a pervasive issue observed across the domain of social cognition.’ ↩︎
It is important to be clear about why this is a problem. It is not a problem that Theory of Mind may involve a variety of different processes and models, so that no single factor will explain performance across a sufficiently diverse set of tasks. But if you want to say, independently of answering the question about models, that we have a solid operationalization of Theory of Mind, then you need statistics to show that your operationalization has some kind of internal coherence. And that is what appears to be missing. ↩︎