I don’t think you are fully getting what I am saying, though that’s understandable, because I haven’t yet said anything about what makes a valid enemy.
I agree there are rarely absolute enemies and allies. There are, however, allies and enemies with respect to particular mutually contradictory objectives.
Not all war is absolute: wars have at times been deliberately bounded in space, and the very existence of rules of war is evidence of partial cooperation between enemies. You may have adversarial conflicts of interest with close friends on some issues; if you can’t align those interests, it isn’t the end of the world. The big problem is lies and sloppy reasoning that go beyond defending one’s own interests into causing unnecessary collateral damage to large groups. The entire framework here is premised on the same distinction you seem to think I don’t have in mind… which is fair, because it was unstated. XD
The big focus is a form of cooperation between enemies to reduce the large-scale, indiscriminate collateral damage of dishonesty. It is easier to start this cooperation between actors that are relatively more aligned, and then scale to actors that are relatively less aligned with each other. Do you sense any floating disagreements remaining?
I think if you frame it as “every transaction and relationship has elements of cooperation and competition, so every communication has a need for both truth and deception,” and then explore the specific types of trust and conflict and how they impact the dimensions of communication, we’d be in excellent-post territory.
The bounds of human understanding mean that we simply don’t know the right balance of cooperation and competition. So we have, at best, some wild guesses as to what’s collateral damage versus what’s productive advantage over our opponents. I’d argue that there’s an amazing amount of self-deception in humans, and I take a Schelling Fence approach to it: I don’t understand the protection and benefit others get from their self-deception and maintained internal inconsistency, so I hesitate to unilaterally decry it. In myself, I strive to keep self-talk and internal models as accurate as possible, and that includes permission to lie to others without hesitation when I think it’s to my advantage.