You’re right, this is not a morality-specific phenomenon. I think there’s a general formulation of this that just has to do with signaling, though I haven’t fully worked out the idea yet.
For example, if in a given interaction it’s important for your interlocutor to believe that you’re a human and not a bot, and you have something to lose if they are skeptical of your humanity, then there are lots of negative externalities that come from the Internet being filled with indistinguishable-from-human chatbots, irrespective of their morality.
I think “trust” is what you’re looking for, and signaling is one part of developing and nurturing that trust. It’s about the (mostly correct, or it doesn’t work) belief that you can expect certain behaviors and reactions, and strongly NOT expect others. If a large percentage of online interactions are made with evil intent, it doesn’t matter much whether they come from chatbots or from human-trafficked exploitation farms—you can’t trust entities that you don’t know pretty well, and that don’t share your cultural and social norms and non-official judgement mechanisms.
Fully agree, but I’d avoid the term “immorality”. Deviation from social norms has this cost, whether those norms are reasonable or not.