The voting example is one of those interesting cases where I disagree with the reasoning but come to a similar conclusion anyway.
I claim the population of people who justify voting on the basis of any formal reasoning is at best a rounding error in the general population, and is probably indistinguishable from zero. Instead, people in general believe one of three things:
There is an election, so I vote because I’m a voter.
Voting is meaningless anyway, so I don’t.
Election? What? Who cares?
But it looks to me like this is still coordination without anyone sharing explicit reasoning with each other. The central difference is that group 1 are all rocks with the word “Vote” painted on them, group 2 are all rocks with the word “Don’t vote” painted on them, and group 3 are all rocks scattered in a field somewhere rather than being in the game.
As I write this it occurs to me that when discussing acausal coordination or trade we are always showing isolated agents doing explicit computation about each other; does the zero-computation case still qualify? This feels sort of like it would be trivial, in the same way they might “coordinate” on not breaking the speed of light or falling at the acceleration of gravity.
On the other hand, there remains the question of how people came to be divided into groups with different cached answers in the first place. There’s definitely a causal explanation for that, it just happens prior to whatever event we are considering. Yet going back to the first hand, the causal circumstances giving rise to differing sets of cached answers can’t be different in any fundamental sense from the ones that give differing decision procedures.
Following from that, I feel like the zero-computation case for acausal coordination is real and counts, which appears to me to make the statement much stronger.
I don’t think the “zero-computation” case should count. Are two ants in an anthill doing acausal coordination? No, they’re just two similar physical systems. It seems to stretch the original meaning, and it’s in no sense “acausal”.
I agree two ants in an anthill are not doing acausal coordination; they are following the pheromone trails laid down by each other. This is the ant version of explicit coordination.
But I think the crux between us is this:
It seems to stretch the original meaning
I agree, it does seem to stretch the original meaning. I think this is because the original meaning was surprising and weird; it seemed to be counterintuitive and I had to put quite a few cycles in to work through the examples of AIs negotiating without coexisting.
But consider for a moment that we had begun from the opposite end: if we accept two rocks with “cooperate” painted on them as counting as coordination, then starting from there we can make a series of deliberate extensions. By this I mean stuff like: if we can have rocks with cooperate painted on, surely we can have agents with cooperate painted on (which is what I think voting mostly is); if we can have agents with cooperate painted on, we can have agents with decision rules about whether to cooperate; if we can have decision rules about whether to cooperate, those rules can use information about other decision rules; and so on until we encompass the original case of superrational AGIs trading acausally with AGIs in the future.
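To make the middle steps of that progression concrete, here is a minimal sketch (the names painted_rock and modelling_agent are purely illustrative, not from any existing framework): a “rock with cooperate painted on it” that does zero computation, next to an agent whose decision rule uses information about the other party’s decision rule, with both pairings ending up coordinated.

```python
# A "rock with cooperate painted on": zero computation, zero information.
def painted_rock(_other):
    return "cooperate"

# An agent whose decision rule uses information about the other decision rule:
# it asks what the other party would do against an unconditional cooperator,
# and mirrors that answer.
def modelling_agent(other):
    return "cooperate" if other(painted_rock) == "cooperate" else "defect"

# Two painted rocks "coordinate" with no reasoning at all.
print(painted_rock(painted_rock), painted_rock(painted_rock))        # cooperate cooperate

# A modelling agent reaches the same outcome as the rock, but by reasoning
# about the other decision rule rather than by having the answer painted on.
print(modelling_agent(painted_rock), painted_rock(modelling_agent))  # cooperate cooperate

# Two modelling agents also coordinate; each one's model of the other bottoms
# out at the painted-rock case, so the recursion terminates.
print(modelling_agent(modelling_agent), modelling_agent(modelling_agent))  # cooperate cooperate
```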
I feel like this progression from cooperating rocks to superrational AGIs is just recognizing a gradient whereby progressively less-similar physical systems can still accomplish the same thing as the zero-computation, zero-information systems that are very similar.
Ah, I see what you mean! Interesting perspective. The one thing I disagree with is the framing: a “gradient” doesn’t seem like the most natural way to see it. It seems more like a binary: “Is there (accurate) modelling of the counterfactual where your choice is different, and did that modelling actually impact the choice? If yes, it’s acausal. If not, it’s not.” This intuitively feels pretty binary to me.
I agree the gradient-of-physical-systems isn’t the most natural way to think about it; I note that it didn’t occur to me until this very conversation despite acausal trade being old hat here.
What I am thinking now is that a more natural way to think about it is overlapping abstraction space. My claim is that one of the conditions for acausal coordination is that all parties need access to the same chunk of abstraction space somewhere in their timeline. This seems to cover the similar-physical-systems intuition we were talking about: two rocks with “cooperate” painted on them are abstractly identical, so check; two superrational AIs need the abstractions to model another superrational AI, so check. This is terribly fuzzy, but it seems to let in all the candidates that should count.
The binary distinction makes sense, but I am a little confused about the work the counterfactual modeling is doing. Suppose I were to choose between two places to go to dinner, conditional on counterfactual modelling of each choice. Would this be acausal in your view?