Paul Faulkner explains why trusting strangers can be the rational thing to do

Consider the trust that is found between people; the trust that is manifest when you trust someone to do something. You might leave your closed diary on a desk where you know your partner will see it, not ask for a second quote when a mechanic says your car needs lots of work, shake hands on a deal, follow a stranger’s directions, or purchase a good that will be delivered later. Examples such as these could be multiplied endlessly because, thankfully, such interpersonal trust is a pervasive feature of our everyday lives. What such examples involve is an act of reliance. For instance, you rely on your partner not reading your diary, and you rely on the mechanic not giving you an inflated quote. Insofar as you trust them in these respects, you are happy to rely on them in these ways; your reliance is willing, rather than forced. In trusting, you do not feel the need to take any precautions, such as keeping your diary in a locked desk or getting a second quote. You do not feel the need because when you trust, you presume that the trusted is trustworthy. In presuming this, you presume that your partner or mechanic, for instance, will do the right thing, where we think that the right thing means not reading another’s private diary or giving an inflated quote. But note that these cases have a worst outcome: you leave your diary on the desk and it gets read, you don’t get a second quote and are stung, and so on. So precautions can seem justifiable. And if precautions are not taken, or not possible, then what seems to be needed is some grounds for judging that the other party will do the right thing. The problem of trust then starts when we begin to think about what grounds we have for this judgement.

This problem of trust can be illustrated by an experiment called the trust game. In this game there are two players: a trusting party or ‘investor’, let’s call her Ivy, and a trusted party or ‘trustee’ we’ll call Tony. In the simplest version, Ivy has a certain endowment, say £100, and an option of keeping this or transferring part or all of it to Tony and keeping whatever remains. The money Ivy transfers to Tony gets multiplied, say by a factor of four, and Tony then has the option of keeping the resulting sum or returning part or all of it. In this case, reliance comes with a worst outcome, since if Tony keeps all the windfall, Ivy loses whatever money is transferred. But a good outcome for Ivy would be to give Tony everything and hope he splits the £400 windfall. If he does share, both do well and the game has resulted in a cooperative outcome.
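The payoff structure of the trust game can be sketched in a few lines of Python. The £100 endowment and fourfold multiplier come from the example above; the function name and parameters are illustrative, not part of the experimental literature.

```python
MULTIPLIER = 4  # each pound Ivy transfers becomes four in Tony's hands

def trust_game(endowment, transfer, return_fraction):
    """Compute final payoffs for the investor (Ivy) and trustee (Tony).

    transfer: the amount Ivy sends (between 0 and her endowment)
    return_fraction: the share of the multiplied pot Tony sends back (0 to 1)
    """
    pot = transfer * MULTIPLIER        # Ivy's transfer is multiplied
    returned = pot * return_fraction   # Tony chooses how much to return
    ivy = endowment - transfer + returned
    tony = pot - returned
    return ivy, tony

# Worst outcome for Ivy: she transfers everything and Tony keeps it all.
print(trust_game(100, 100, 0.0))   # (0.0, 400.0)

# Cooperative outcome: Tony splits the £400 windfall.
print(trust_game(100, 100, 0.5))   # (200.0, 200.0)

# Ivy's "safe" option: keep the endowment and transfer nothing.
print(trust_game(100, 0, 0.5))     # (100.0, 0.0)
```

The three calls make the dilemma vivid: full trust doubles Ivy’s money if Tony cooperates, but loses her everything if he does not, while the safe option guarantees her the endowment.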

So the question is, what reason does Ivy have for thinking that Tony will be cooperative? Or, equivalently, what reason does Ivy have for relying on Tony to split any windfall? If the game were repeated, so there were a series of transactions, then this would give Ivy grounds for thinking that the cooperative outcome will result. For were Tony to keep all the monies in one game, Ivy wouldn’t transfer any in the next or future games, so he would lose out on these future transfers. So Tony has good reason to play the game cooperatively, and Ivy knows this. However, if the game is played just the one time, then it seems that it is reasonable for Ivy to make a transfer only if she has grounds for thinking that Tony will prove cooperative.
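The repeated-game reasoning can also be sketched. Assuming Ivy follows the rule described above, stopping all transfers the moment Tony keeps everything, Tony does far better by splitting every round. The ten-round horizon and the exact figures are illustrative assumptions reusing the £100 and fourfold-multiplier example.

```python
ROUNDS = 10            # illustrative number of repetitions
ENDOWMENT = 100        # Ivy transfers her full endowment each round
MULTIPLIER = 4

def tony_total(defect_round=None):
    """Tony's total take over the repeated game.

    If defect_round is None, Tony splits the windfall every round.
    Otherwise he keeps everything in that round, after which Ivy
    stops transferring (the rule described in the text).
    """
    total = 0
    for r in range(1, ROUNDS + 1):
        pot = ENDOWMENT * MULTIPLIER
        if defect_round is None or r < defect_round:
            total += pot // 2   # split the windfall with Ivy
        else:
            total += pot        # keep it all this round...
            break               # ...and Ivy never transfers again
    return total

print(tony_total())               # always cooperate: 2000
print(tony_total(defect_round=1)) # defect immediately: 400
```

Cooperating every round nets Tony £2,000 against £400 for immediate defection, which is why, when the game repeats, Tony has good reason to cooperate and Ivy knows it. In the one-off game this future leverage disappears.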

Thankfully, in our everyday lives we have lots of grounds for such judgements. People cooperate out of fear of sanctions, out of particular self-interest, and because they value both cooperation and friendly relations. And we are aware of all these motivating reasons. Thus you will almost certainly have good grounds for thinking that your partner will respect your privacy and that your mechanic is honest. However, suppose that none of these usual grounds are available to Ivy. Suppose—as is meant to be the case in the experimental set-up—that she knows nothing about Tony and so is ignorant of what does or might move him. Given this assumption, it becomes hard to see how it could be reasonable for Ivy to make any transfer. In this position of ignorance, when it is known that the game is a one-off, it is surely better for someone in Ivy’s position just to keep the money.

Generalizing from the trust game, reliance is problematic—there is a problem of trust—whenever (1) we need to rely on another person but recognize that doing so could have a bad outcome, (2) we know that this interaction is one-off, and (3) we are entirely ignorant of the other person’s individual motivations but recognize a general motivation to be unreliable. It is under these conditions that cooperation seems unreasonable. It follows both that there should be little trust in the trust game, and that trust should be limited in everyday life.

What’s surprising then is that neither of these conclusions fits the empirical data: the experimental result is that people by and large trust one another when playing the trust game; and interpersonal trust is near ubiquitous in everyday life, far outstripping what grounds we have for any judgement of a trusted party’s interests, motives, reliability, or trustworthiness. The issue is how to reconcile these empirical facts with the philosophical result that reliance, and so trust, seems irrational.

What is needed for this reconciliation is some account of how interacting parties—for instance, the investor and trustee (Ivy and Tony), or you and the mechanic—have a reason to cooperate in every case. If there is such a reason, then there will be a reason to cooperate even under conditions of ignorance in the one-off case, and so there would be no problem of trust. The beginnings of this account can be found in Bernard Williams’s discussion of sincerity.

To illustrate how sincerity fits into this discussion, suppose that the investor, Ivy, and trustee, Tony, were allowed a single communication where Ivy asked, ‘Will you make an equitable return transfer?’, and Tony answered, ‘Yes’. In this case, Ivy’s interest is to know the truth and Tony’s sincerity is valued to the extent that it allows this. But if sincerity has only this instrumental value—that is, if sincerity is valued only as a means to an end—then this single communication doesn’t change anything. It remains true that the interests of Ivy and Tony pull in different directions, and that Tony has no reason to tell the truth. The question ‘Should Ivy make the transfer?’ simply becomes the question ‘Should Ivy believe Tony?’, and the answer to both seems to be no. Thus Williams goes on to argue that this problem of trust cannot be resolved while sincerity is only given instrumental value. Given that a functioning society requires the successful communication of information, what is needed is that sincerity be regarded as intrinsically valuable. Our society achieves this, Williams proposes, through valuing trust and trustworthiness, and seeing sincerity as an instance of the latter: sincerity as trustworthiness in speech.

That we value trust and trustworthiness intrinsically—that the description of an act or person as ‘trusting’ or ‘trustworthy’ amounts to a positive evaluation of it or them—has been unfortunately obscured by the philosophical tendency to think about trust as simply Ivy trusting Tony to split the windfall or, more generally, the investor trusting the trustee to behave in a certain way. That is, the value for Ivy in trusting Tony is fundamentally the value of the goods that come to Ivy from Tony’s sharing. The value of trusting is thereby instrumental: it is the good things that might come from trusting that are valued, rather than the trust itself.

The problem of trust arises when we value trust only as a means to an end, and add in the philosophical assumption that it is our interests alone that explain our actions. Since our interests can pull in different directions and we think that the rational thing to do is for each of us to attempt to satisfy these incompatible interests, then we need a reason for thinking that interests don’t pull apart in cases like that of Ivy and Tony. And when we lack such a reason, reliance, and so trust, becomes problematic. But this philosophical assumption that institutes the demand for this reason simply misses how our valuing trust determines that trust itself can be a reason that explains action.

In order to recognize how we value trust in itself, and identify this reason, we must shift focus from Ivy’s trusting Tony to share to the attitude behind this act of reliance. Any value intrinsic to trust must inhere simply in Ivy’s having the trusting attitude—that is, in her trusting Tony, or just in her trusting. It must inhere in the background attitude found in cases of trusting someone to do something. Focusing on this background attitude then shows that trust, at heart, is an optimistic attitude of goodwill, which may take specific persons as its object and which can support specific acts of reliance. The reason that is present in every potentially cooperative encounter is then the reason that is supplied by this optimistic attitude.

To elaborate, suppose that Ivy approaches the trust game with an attitude of trust. In this case, when given the choice of making a transfer or not, she will presume that Tony will do the right or trustworthy thing and return a fair share of the money gained. In having an attitude of trust, Ivy thinks well of Tony, and this presumption articulates what it is to be optimistic and think well of the trustee in the context that is the trust game. This presumption that the trustee will do the trustworthy thing then gives the investor a reason to make a transfer. Even in the one-off game, played under conditions of ignorance, the presumption of trustworthiness is available, so that Ivy has reason to transfer money to Tony.

We value trust in its own right. And this gives us a reason to rely on the people we trust to do the things we trust them to do. On this account, it is trust itself—trust as an attitude simpliciter—that supplies our reasons for trusting people to do things. It is not an assessment of other people’s motives or interests, to which we frequently lack access. This is not to conclude that the problem of trust is not a fundamental problem for cooperative society. It is. Rather, it is to say that we are lucky to live in what Williams calls ‘better times’, where we have a social evaluative outlook—which is a view of trust—that renders this problem solvable. We shouldn’t take this for granted.

Paul Faulkner is Reader in Philosophy at the University of Sheffield. This essay is based on his recent book The Philosophy of Trust (OUP, 2017), co-edited with Thomas Simpson. His research interests include the role of testimony as a source of knowledge, our collective knowledge of things, and what it is that makes lying wrong.