Immorality has negative externalities that are diffuse and hard to count, but quite possibly worse than its direct effects.
Take the example of Alice lying to Bob about something, to her benefit and his detriment. I will call the effects of the lie on Alice and Bob direct, and the effects on everybody else externalities. Concretely, the negative externality here is that Bob is, on the margin, going to trust others less in the future for having been lied to by Alice than he would have if Alice had been truthful. So in all of Bob’s future interactions, his truthful counterparties will have to work extra hard to prove that they are truthful, and in some cases potentially beneficial deals may simply not happen because of Bob’s suspicions and his efforts to avoid being betrayed.
This extra work that Bob’s future counterparties have to put in, together with the lost value from missed deals, adds up to a meaningful cost. And it may extend beyond Bob, since everyone else who finds out that Alice lied to Bob will update their priors in the same direction as Bob did, creating second-order costs. What’s more, since everyone now believes their counterparties suspect them (marginally more) of lying, the reputational cost of actually lying drops: they already feel they are seen as partial liars, so confirming that suspicion costs less than it would if they felt they were seen as fully truthful. As a result, everyone might actually become more likely to lie.
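To make the claim that these diffuse costs add up to something meaningful concrete, here is a minimal toy sketch of the accounting, with entirely made-up numbers (my own illustration, not a model proposed anywhere in this thread): one observed lie bumps Bob’s suspicion a lot and many onlookers’ suspicion a little, and honest counterparties are assumed to pay a small extra verification cost per future interaction.

```python
# Toy sketch with invented numbers: how one lie's diffuse verification costs
# can outweigh its direct harm. Every value below is an arbitrary assumption.

def extra_verification_cost(suspicion_bump, interactions=100, cost_per_unit=1.0):
    # Honest counterparties are assumed to spend extra effort proportional to
    # the bump in suspicion, spread over someone's future interactions.
    return suspicion_bump * cost_per_unit * interactions

direct_cost_to_bob = 5.0   # Bob's immediate loss from the lie itself
bob_bump = 0.15            # how much Bob raises his prior that counterparties lie
observer_bump = 0.03       # smaller update by people who merely hear about it
n_observers = 50           # people who learn that Alice lied to Bob

externality = (
    extra_verification_cost(bob_bump)
    + n_observers * extra_verification_cost(observer_bump)
)

print(f"direct cost to Bob:  {direct_cost_to_bob:.1f}")   # 5.0
print(f"diffuse externality: {externality:.1f}")          # 165.0
```

Under these arbitrary numbers the diffuse cost (about 165) dwarfs the direct cost (5); the only point is that a small bump in suspicion, spread across many future interactions and many observers, can dominate the direct harm.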
So there’s a cost of deteriorating social trust, of p*ssing in the pool of social commons.
One consequence that seems to flow from this, and which I personally find morally counter-intuitive, and don’t actually believe, but cannot logically dismiss, is that if you’re going to lie you have a moral obligation to not get found out. This way, the damage of your lie is at least limited to its direct effects.
Fully agree, but I’d avoid the term “immorality”. Deviation from social norms has this cost, whether those norms are reasonable or not.
You’re right, this is not a morality-specific phenomenon. I think there’s a general formulation of this that just has to do with signaling, though I haven’t fully worked out the idea yet.
For example, if in a given interaction it’s important for your interlocutor to believe that you’re a human and not a bot, and you have something to lose if they are skeptical of your humanity, then there are lots of negative externalities that come from the Internet being filled with indistinguishable-from-human chatbots, irrespective of its morality.
I think “trust” is what you’re looking for, and signaling is one part of developing and nurturing that trust. It’s about the (mostly correct, or it doesn’t work) belief that you can expect certain behaviors and reactions, and strongly NOT expect others. If a large percentage of online interactions are with evil intent, it doesn’t matter too much whether they’re chatbots or human-trafficked exploitation farms—you can’t trust entities that you don’t know pretty well, and who don’t share your cultural and social norms and non-official judgement mechanisms.
With widespread information sharing, the ‘can’t fool all the people all the time’ logic extends to this attempt to lie without consequences: we’ll learn that people hide their lies well but still lie quite a lot, so we’ll become even more suspicious in every situation, undoing the alleged externality-reducing effect of the ‘not get found out’ idea (in any realistic world with imperfect hiding, anyway).