I dunno—my intuitions don’t really apply to distant and convoluted scenarios, so I’m deeply suspicious of any argument that I (or “an agent” I care anything about) should do something non-obvious. I will say that’s not acausal—decisions clearly have causal effects. It’s non-communicative, which IS a problem, but a very different one. Thomas Schelling has things to say on that topic, and it’s quite likely that randomness is the wrong mechanism here: finding ANY common knowledge can lead to a Schelling point that gives both agents a higher chance of picking the same color.
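To make the last point concrete, here's a toy sketch (my own illustration, not from Schelling) of two non-communicating agents picking a color. The color list and the "salience" rule (shortest name, alphabetical tiebreak) are hypothetical stand-ins for whatever common knowledge the agents actually share; the point is just that any shared convention beats independent randomness.

```python
import random

COLORS = ["red", "green", "blue", "chartreuse", "mauve"]

def random_pick(rng):
    # No common knowledge: each agent picks uniformly at random.
    return rng.choice(COLORS)

def schelling_pick(_rng):
    # Shared convention (hypothetical salience rule): pick the color
    # with the shortest name, breaking ties alphabetically.
    return min(COLORS, key=lambda c: (len(c), c))

def match_rate(strategy, trials=10_000, seed=0):
    # Simulate two independent agents using the same strategy and
    # measure how often they land on the same color.
    rng = random.Random(seed)
    hits = sum(strategy(rng) == strategy(rng) for _ in range(trials))
    return hits / trials

print(match_rate(random_pick))     # near 1/5 for 5 colors
print(match_rate(schelling_pick))  # 1.0 — the convention always coordinates
```

With 5 colors, independent random picks agree about 20% of the time, while any mutually obvious rule, however arbitrary, coordinates every time.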