Ah, I see what you mean! Interesting perspective. The one thing I disagree with is the "gradient" framing; it doesn't seem like the most natural way to see it. It seems more like a binary: "Is there (accurate) modelling of the counterfactual of your choice being different going on, which actually impacted the choice? If yes, it's acausal. If not, it's not." This intuitively feels pretty binary to me.
I agree the gradient-of-physical-systems isn’t the most natural way to think about it; I note that it didn’t occur to me until this very conversation despite acausal trade being old hat here.
What I am thinking now is that a more natural way to think about it is overlapping abstraction space. My claim is that in order to acausally coordinate, at least one necessary condition is that all parties have access to the same chunk of abstraction space somewhere in their timeline. This seems to cover the similar-physical-systems intuition we were talking about: two rocks with "coordinate" painted on them are abstractly identical, so check; two superrational AIs each need the abstractions to model another superrational AI, so check. This is terribly fuzzy, but it seems to admit all the candidates for successful coordination.
The binary distinction makes sense, but I am a little confused about the work the counterfactual modeling is doing. Suppose I were to choose between two places to go to dinner, conditional on counterfactual modelling of each choice. Would this be acausal in your view?