The transitivity of trust

Cross-posted from Overcoming Bias. Comments there.

***

Suppose you tell a close friend a secret. You consider them trustworthy, and don't fear that it will get out. Suppose they then ask to tell the secret to a friend of theirs whom you don't know. They claim this person is also highly trustworthy. I think most people would feel significantly less secure agreeing to that.

In general, people trust their friends. Their friends trust their own friends, and so on. But I think people trust friends of friends, or friends of friends of friends, less than proportionally. E.g. if you act as though there is a one percent chance of your friend failing you, you don't act as though there is a mere 1-(.99*.99) ≈ 2% chance of your friend's friend failing you; you act as though the chance is substantially higher.
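
For concreteness, here is a minimal sketch of what proportional trust would look like, assuming each link in the chain fails independently at the same rate; the one percent figure is just the example rate from above.

```python
# A minimal sketch of how distrust would compound along a chain of
# friends if each link failed independently at the same rate.
# The 1% failure rate is the hypothetical figure from the text.

def chain_failure_probability(per_link_failure: float, links: int) -> float:
    """Probability that at least one link in the chain fails,
    assuming links fail independently."""
    return 1 - (1 - per_link_failure) ** links

p = 0.01
print(chain_failure_probability(p, 1))  # ~0.01    your friend
print(chain_failure_probability(p, 2))  # ~0.0199  friend of a friend
print(chain_failure_probability(p, 3))  # ~0.0297  friend of a friend of a friend
```

The observation is that people's actual wariness of friends of friends seems to grow much faster than this curve.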

One possible explanation is that we generally expect the people we trust to have much worse judgement about whom to trust than about the average thing. But why would this be so? Perhaps everyone does just have worse judgement about whom to trust than they do about other things. But to account for what we observe, people would on average have to think themselves better than others in this regard. Which might not be surprising, except that they would have to think their advantage over others is larger in this domain than in other domains; otherwise they would just trust others less in general. Why would this be?

Another possibility I have heard suggested is that we trust our friends more than is warranted by their true probability of defecting, for non-epistemic purposes. In which case, which purposes?

Trusting a person involves choosing to make your own payoffs depend on their actions in a circumstance where it would not be worth doing so if you thought they would defect with high probability. If you think they are likely to defect, you only rely on them when there are particularly large gains from them cooperating combined with small losses from them defecting. As they become more likely to cooperate, trusting them in more cases becomes worthwhile. So trusting for non-epistemic purposes involves relying on a person in a case where their probability of defecting should make it not worthwhile, for some other gain.
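
As a rough sketch of that decision rule, under the simple assumption that not relying pays zero: relying is worthwhile exactly when the expected gain from cooperation outweighs the expected loss from defection. The payoff numbers below are invented for illustration.

```python
# A hedged sketch of the reliance decision described above: rely on
# someone only when the expected value of doing so beats not relying
# (normalized here to zero). All numbers are illustrative assumptions.

def worth_relying(p_defect: float, gain_if_cooperate: float,
                  loss_if_defect: float) -> bool:
    """Expected-value test: rely iff
    (1 - p_defect) * gain - p_defect * loss > 0."""
    expected = (1 - p_defect) * gain_if_cooperate - p_defect * loss_if_defect
    return expected > 0

# Large gain, small loss: worth relying even on a likely defector.
print(worth_relying(p_defect=0.6, gain_if_cooperate=10, loss_if_defect=1))   # True
# Small gain, large loss: worth relying only on a near-certain cooperator.
print(worth_relying(p_defect=0.05, gain_if_cooperate=1, loss_if_defect=10))  # True
print(worth_relying(p_defect=0.20, gain_if_cooperate=1, loss_if_defect=10))  # False
```

As the probability of cooperation rises, the test passes in a wider range of cases, which is the sense in which trusting someone in more cases becomes worthwhile.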

What other gains might you get? Such trust might signal something, but consistently relying too much on people doesn't seem to make one look good in any way that is obvious to me. It might signal to that person that you trust them, but that just brings us back to the question of how trusting people excessively might benefit you.

Maybe merely relying on a person in such a case could increase their probability of taking the cooperative action? This wouldn't explain the intransitivity on its own, since we would still need a model in which trusting a friend's friend doesn't cause the friend's friend to become more trustworthy in the same way.

Another possibility is that merely trusting a person does not get such a gain, but a pair trusting one another does. This might explain why you can trust your friends above their reliability, but not their friends. By what mechanism could this happen?

An obvious answer is that a pair who keep interacting might each cooperate a lot more than they naturally would, in order to elicit future cooperation from the other. So you trust your friends the correct amount, but they are unusually trustworthy toward you. My guess is that this is what happens.
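
This is the standard repeated-game story. Below is a hedged sketch of it using a grim-trigger strategy (cooperate until the other side defects, then never cooperate again), which is my illustrative choice rather than anything from the post; the payoff values are also invented, with the usual ordering temptation > reward > punishment.

```python
# Sketch of the repeated-game incentive: defecting pays off once (T),
# but forfeits the stream of future cooperation (R each round) in
# favor of the punishment payoff (P), discounted by delta per round.

def cooperation_sustainable(T: float, R: float, P: float, delta: float) -> bool:
    """Grim-trigger test: cooperating forever beats a one-off defection
    followed by permanent punishment."""
    cooperate_value = R / (1 - delta)           # R every round
    defect_value = T + delta * P / (1 - delta)  # T once, then P forever
    return cooperate_value >= defect_value

# A friend you expect to keep interacting with (high delta) has an
# incentive to cooperate; a friend's friend you may never deal with
# again (low delta) does not.
print(cooperation_sustainable(T=5, R=3, P=1, delta=0.9))  # True
print(cooperation_sustainable(T=5, R=3, P=1, delta=0.1))  # False
```

On this account, the friend-of-friend's low expected frequency of future interaction with you is exactly what removes their incentive.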

So here the theory is that you trust friends substantially more than friends of friends because friends have the right incentives to cooperate, whereas friends of friends don’t. But if your friends are really cooperative, why would they give you unreliable advice – to trust their own friends?

One answer is that your friends believe trustworthiness is a property of individuals, not of relationships. Since their friends are trustworthy for them, they recommend them to you. But this leaves you with the question of why your friends are wrong about this, yet you know it. Particularly since, generalizing this model, everyone's friends are wrong and everyone knows it.

One possibility is that everyone learns these things from experience, and categorizes the events in obvious ways that differ from person to person. Your friend Eric sees a series of instances of his friend James being reliable, and so feels confident that James will be reliable. You see a series of instances of various friends of friends not being especially reliable, and most easily see James as one of that set.

It is not that your friends are more wrong than you, but that everyone is more wrong when recommending their friends to others than when deciding whether to trust such recommendations, as a result of sample bias. Eric's sample of James mostly contains instances of James interacting with Eric, so he overstates James' trustworthiness toward others. Your sample is closer to the true distribution of James' behavior. However, you don't have an explicit model of why your estimate differs from Eric's, which would allow you to believe in general that friends overestimate the trustworthiness of their friends to others, and thus to correct your own such biases.
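
Here is a toy simulation of this sampling story; the cooperation rates and sample sizes are invented for illustration, and the names are the ones from the example above.

```python
import random

# Toy model of the sample bias described above: James cooperates at a
# high rate with his own friends (where repeated-game incentives bind)
# and at a lower rate with everyone else. Both rates are assumptions.

random.seed(0)
RATE_WITH_OWN_FRIENDS = 0.99  # what Eric's sample reflects
RATE_WITH_OTHERS = 0.90       # what you, a stranger, would experience

def observed_reliability(rate: float, n: int) -> float:
    """Fraction of n interactions in which cooperation is observed."""
    return sum(random.random() < rate for _ in range(n)) / n

# Eric's sample: almost entirely James-with-Eric interactions.
eric_estimate = observed_reliability(RATE_WITH_OWN_FRIENDS, 200)
# Your sample: friends of friends in general, closer to the rate
# James would show toward you.
your_estimate = observed_reliability(RATE_WITH_OTHERS, 200)

print(f"Eric's estimate of James: {eric_estimate:.2f}")             # near 0.99
print(f"Your estimate of friends of friends: {your_estimate:.2f}")  # near 0.90
```

Both estimators are honest about their own samples; the divergence comes entirely from which interactions each party got to observe.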

