Counterfactuals relevant to decision making in this context are not other MWI branches, but other multiverse descriptions (partial models), with different amplitudes inside them. You are not affecting other branches from your branch; you are determining which multiverse takes place, depending on what your abstract decision making computation does.
This computation (you) is not in itself located in a particular multiverse or on a particular branch; it's some sort of mathematical gadget, which can be considered (reasoned about, simulated) from many places where it's thereby embedded, including counterfactual places that, for purposes of decision making, it can't rule out being embedded in.
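To make the "one computation, many embeddings" point concrete, here is a minimal toy sketch in the spirit of Newcomb's problem (the function names and payoffs are my own illustrative assumptions, not anything stated above): the same decision computation is evaluated both by a predictor that simulates it and by the instance that acts on it, so its single output fixes what happens at both embeddings at once.

```python
# Toy sketch (hypothetical names/payoffs): one abstract decision computation,
# two places that embed it by evaluating it.

def decision() -> str:
    """The abstract decision making computation ('you')."""
    return "one-box"

def predictor_fills_box() -> int:
    # The predictor embeds the computation by simulating it.
    return 1_000_000 if decision() == "one-box" else 0

def instance_acts(box_contents: int) -> int:
    # The instance embeds the same computation by running it and acting on it.
    return box_contents if decision() == "one-box" else box_contents + 1_000

print(instance_acts(predictor_fills_box()))  # payoff fixed by decision() alone
```

Changing what decision() returns doesn't let one embedding act on the other; it selects a different overall description in which both embeddings agree with it.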
With acausal trade, the trading partners (agents) are these mathematical gadgets, not their instances in their respective worlds. Say there are agents A1 and A2, which have instances I1 and I2 in worlds W1 and W2. Then I1 is not an agent in this sense, and not a party to an acausal trade between them; it's merely a way for A1 to control W1 (in the usual causal sense). To facilitate acausal trade, A1 and A2 need to reason about each other, but at the end of the day the deal gets executed by I1 and I2 on behalf of their respective abstract masters.
This setup becomes more practical if we start with I1 and I2 (instead of A1 and A2) and formulate the common knowledge they have about each other as the abstract gadget A that facilitates coordination between them, with the part of its verdict that I1 would (commit in advance to) follow being A1 (by construction), and the part that I2 would follow being A2. This shared gadget A is an adjudicator between I1 and I2, and it doesn't need to be anywhere near as complicated as they are; it only gets to hold whatever common knowledge they happen to have about each other, even if it's very little. It's a shared idea they both follow (and know each other to be following, etc.) and thus coordinate on.
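As a rough sketch of the adjudicator idea (the data and the "trade" below are made-up assumptions for illustration, not part of the setup described above): each instance reconstructs the same small gadget A from their common knowledge, runs it, and follows only its own part of the verdict, so they coordinate without any causal contact.

```python
# Minimal sketch with hypothetical contents. A holds only the common knowledge,
# nowhere near a full model of either instance.

COMMON_KNOWLEDGE = {
    "A1_wants": "apples",   # what each agent is commonly known to value
    "A2_wants": "oranges",
}

def adjudicator(ck: dict) -> dict:
    """The shared gadget A: a verdict both instances can derive and follow."""
    return {
        # A1: the part of the verdict I1 commits in advance to follow (in W1).
        "A1": f"grow {ck['A2_wants']} in W1",
        # A2: the part of the verdict I2 commits in advance to follow (in W2).
        "A2": f"grow {ck['A1_wants']} in W2",
    }

# Each instance evaluates A locally; no communication between W1 and W2 occurs.
print("I1 does:", adjudicator(COMMON_KNOWLEDGE)["A1"])  # grow oranges in W1
print("I2 does:", adjudicator(COMMON_KNOWLEDGE)["A2"])  # grow apples in W2
```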
I see, thanks for this comment. But can humans be considered to possess an abstract decision making computation? It seems that, due to quantum mechanics, it's impossible to predict the decision of a human perfectly even if you have the complete initial conditions.