If you think there is a large multiverse, then there are many worlds including people very much like you in a variety of situations (this is a sense of ‘counterfactual’ which isn’t all in the mind). Suppose that you care about people who are very similar to you. Then you would like to trade with real entities in these branches, when they are able to affect something you care about. Of course any trade with them will be acausal.
In general it’s very hard to predict the relative likelihoods of different worlds, and how likely the agents in them are to predict the existence of your world. This is a barrier to acausal trade. Salient counterfactuals (in the ‘in the mind’ sense) give you a relatively easy way of reasoning about a slice of worlds you care about, and they also give your putative trade partner a relatively easy way of reasoning about your world. This helps to enable trade between these branches.
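One rough way to see where that barrier bites, and what salience buys you (a back-of-envelope sketch; the probabilities, numbers, and function names are purely illustrative, not anything established above):

```python
# Illustrative sketch only: treat the decision to "trade" across branches
# as an expected-value calculation. The cost is paid for sure in your
# branch; the benefit only arrives if the partner exists and models your
# world well enough to reciprocate.

def trade_is_worthwhile(p_partner_exists, p_partner_models_you,
                        benefit_to_you, cost_to_you):
    expected_benefit = p_partner_exists * p_partner_models_you * benefit_to_you
    return expected_benefit > cost_to_you

# Without a salient counterfactual, modelling the partner's world is hard,
# so p_partner_models_you is tiny and the trade doesn't go through:
print(trade_is_worthwhile(0.5, 0.01, 10, 1))  # False

# A salient counterfactual raises that probability on both sides:
print(trade_is_worthwhile(0.5, 0.8, 10, 1))   # True
```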
When you say “multiverse” and “branches”, do you specifically mean the MWI?
Can you walk me through an example of a trade where you explicitly label all the moving parts with what they are and where they are?
In particular, if you assume MWI and assume that other branches exist, then people-like-you in other branches are not counterfactual because you assume they exist to start with.
I give you things, and you give me things. The result is positive sum. That’s trade. Causal and acausal trades both follow this pattern.
In the causal case, each transfer is conditional on the other transfer. Possibly in the traditional form of a barter transaction, possibly in the form of “if you don’t reciprocate, I’ll stop doing this in the future.”
In the acausal case, the transfer is predicated on the belief that helping out entities who reason like you do is beneficial in the long run, because other entities who reason like you do will, for the same reason, help you out. There’s no specific causal chain connecting any two individual transfers.
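A minimal sketch of that pattern (my own toy example in the spirit of the twin prisoner’s dilemma, not a claim about how real agents compute this): two agents in different branches run the same decision procedure, so neither choice causes the other, yet choosing to give is positive-sum for both.

```python
# Toy illustration (assumptions: identical decision procedures, symmetric
# payoffs). Neither agent's choice causally affects the other's, but since
# they run the same procedure on the same inputs, their choices match.

def decision_procedure(benefit_if_helped, cost_of_helping):
    """Shared policy: give iff being helped by someone who reasons like
    you is worth more than what helping costs you."""
    return "give" if benefit_if_helped > cost_of_helping else "keep"

def payoff(my_choice, their_choice, benefit=3, cost=1):
    gain = benefit if their_choice == "give" else 0
    loss = cost if my_choice == "give" else 0
    return gain - loss

choice_a = decision_procedure(benefit_if_helped=3, cost_of_helping=1)
choice_b = decision_procedure(benefit_if_helped=3, cost_of_helping=1)

print(choice_a, choice_b)                                       # give give
print(payoff(choice_a, choice_b), payoff(choice_b, choice_a))   # 2 2 -> positive sum
```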
I give you things, and you give me things. The result is positive sum. That’s trade.
Provided “I” and “you” are both real, existing entities and are not counterfactuals.
If you give things to a figment of your imagination and it gives things back to you, well, either you have something going on with your tulpa or you probably should see a psychotherapist :-/
I am sorry, this makes no sense to me at all.
Playing games inside your own mind has nothing to do with trades with other real entities, acausal or not.
The first version isn’t inside your own mind.