I don’t claim that signaling is the only path, nor that humans are correct in their decision-making on these topics. I only claim that there are causal reasons for these choices, and that they explain the behavior better than acausal coordination does.
Mostly I want to support your prior that “acausal coordination is a weird thing that AIs might do in the future (or more generally that may apply in very rare cases, but will be extremely hard to find clear examples of)”.
I guess I’m just not following what the causal reasons are here?