I don’t quite understand this. “Counterfactual” means “does not exist”, something made up.
B’ comes from the counterfactual scenario Y, which means that neither Y nor B’ exists in reality; they are just figments of A’s imagination.
I agree that not everyone will be interested in engaging in counterfactual trade. I gestured towards some reasons why you might be: “Agents might engage in counterfactual trade either because they do care about the agents in the counterfactuals (which seems plausible at least under some beliefs about a large multiverse), or because it’s instrumentally useful as a tractable decision rule that approximates what they’d ideally like to do better than similarly tractable alternatives.” The result ends up looking like “I know they would have done the same for me”.
My question isn’t why someone would be interested; my question is how one can engage in trade with a figment of one’s own imagination.
You say the result ends up looking like “I know they would have done the same for me”.
Who are “they”?
“My friend was in trouble, so I helped them out, even though I knew I would never be repaid. I know they would have done the same for me.”
Yes. Notice: a real friend, not a counterfactual one. Also “I knew I would never be repaid” makes this not a trade but just an altruistic act.
And “they would have done the same for me” is just games you play in your head. It could just as easily be “He probably wouldn’t do that for me, but I don’t care, he’s my friend”.
No, your real friend is the one you helped. The friend who helps you in the counterfactual situation where you are in trouble is just in your head, not real. Your counterfactual friend helps you; in return, you help your real friend. The benefit you get is that once you really are in trouble, the future version of your friend is similar enough to the counterfactual friend that he really will help you. The better you know your friend, the likelier this is.
I’m not saying that that isn’t a bit silly. But I think it’s coherent. In fact it might be just a geeky way to describe how people often think in reality.
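To make that reasoning concrete, here is a minimal sketch; the stakes are made up, and the assumption that how well you know your friend translates directly into a probability of actually being helped is purely illustrative, not anything established in this thread.

```python
# Illustrative sketch: is helping now worth it, given that a sufficiently
# similar future friend would reciprocate? All numbers are made up.

def expected_value_of_helping(cost_now, benefit_if_helped, p_in_trouble, similarity):
    """similarity in [0, 1]: how closely the real friend's future behaviour is
    expected to match the imagined counterfactual friend who helps you.
    Simplifying assumption: similarity maps directly to the chance of being helped."""
    p_helped_later = similarity
    return -cost_now + p_in_trouble * p_helped_later * benefit_if_helped

# A friend you know well vs. a near-stranger, with arbitrary stakes:
print(expected_value_of_helping(cost_now=10, benefit_if_helped=100, p_in_trouble=0.3, similarity=0.9))  # about +17
print(expected_value_of_helping(cost_now=10, benefit_if_helped=100, p_in_trouble=0.3, similarity=0.2))  # about -4
```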
That seems like a remarkably convoluted way to describe a trivial situation where you help a member of your in-group.
The direct interpretation is that “they” are people elsewhere in a large multiverse. The fact that they can be pictured in the imagination gives the agent evidence about their existence.
The instrumental interpretation is that one acts as though one were trading with a figment of one’s imagination, as a method of trading with other real people (who also act this way), because it is computationally tractable and tends to produce better outcomes all round.
I am sorry, this makes no sense to me at all.
Playing games inside your own mind has nothing to do with trades with other real entities, acausal or not.
The first version isn’t inside your own mind.
If you think there is a large multiverse, then there are many worlds including people very much like you in a variety of situations (this is a sense of ‘counterfactual’ which isn’t all in the mind). Suppose that you care about people who are very similar to you. Then you would like to trade with real entities in these branches, when they are able to affect something you care about. Of course any trade with them will be acausal.
In general it’s very hard to predict the relative likelihoods of different worlds, and the likelihood of agents in them predicting the existence of your world. This provides a barrier to acausal trade. Salient counterfactuals (in the ‘in the mind’ sense) give you a relatively easy way of reasoning about a slice of worlds you care about, including the fact that your putative trade partner also has a relatively easy way of reasoning about your world. This helps to enable trade between these branches.
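As a hedged illustration of that barrier: a toy expected-value calculation in which the credences, cost, and benefit are invented for the example, and the partner’s help only lands if their branch exists and they manage to model your world.

```python
# Toy expected-value sketch of the barrier to acausal trade across branches.
# All credences and payoffs are invented for the example.

def value_of_committing(p_branch_exists, p_partner_models_your_world, cost, benefit):
    """Expected value of paying `cost` in your own branch, given your credence that
    the partner's branch exists and that the agent there models your world well
    enough (and follows the same policy) for the `benefit` to land where you care."""
    return -cost + p_branch_exists * p_partner_models_your_world * benefit

# An arbitrary, hard-to-model world vs. a salient counterfactual both sides can reason about:
print(value_of_committing(0.5, 0.01, cost=1, benefit=10))  # about -0.95: not worth committing
print(value_of_committing(0.5, 0.60, cost=1, benefit=10))  # about +2.0: the trade becomes worthwhile
```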
When you say “multiverse” and “branches”, do you specifically mean the MWI?
Can you walk me through an example of a trade where you explicitly label all the moving parts with what they are and where they are?
In particular, if you assume MWI and assume that other branches exist, then people-like-you in other branches are not counterfactual because you assume they exist to start with.
I give you things, and you give me things. The result is positive sum. That’s trade. Causal and acausal trades both follow this pattern.
In the causal case, each transfer is conditional on the other transfer. Possibly in the traditional form of a barter transaction, possibly in the form of “if you don’t reciprocate, I’ll stop doing this in the future.”
In the acausal case, it’s predicated on the belief that helping out entities who reason like you do will be beneficial in the long run, because other entities who reason like you do will help you out in turn. There’s no specific causal chain connecting two individual transfers.
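A toy sketch of the two patterns, with stipulated payoffs (giving costs the giver 1 and is worth 3 to the receiver) and a single flag standing in for the much harder problem of predicting that the other party reasons like you:

```python
# Toy illustration of causal vs. acausal trade. Stipulated payoffs:
# giving costs the giver 1 and is worth 3 to the receiver.
COST, BENEFIT = 1, 3

def causal_trade(partner_reciprocates: bool):
    """Causal case: each transfer is conditional on the other actually happening."""
    if partner_reciprocates:
        return (-COST + BENEFIT, -COST + BENEFIT)  # both give, both end up at +2
    return (0, 0)  # no reciprocation -> no trade takes place

def acausal_trade(partner_reasons_like_me: bool):
    """Acausal case: I give iff I predict that an agent who reasons like me also gives.
    There is no causal chain between the two individual transfers."""
    i_give = partner_reasons_like_me      # predicting them is, in effect, predicting myself
    they_give = partner_reasons_like_me   # and symmetrically on their side
    my_payoff = (-COST if i_give else 0) + (BENEFIT if they_give else 0)
    their_payoff = (-COST if they_give else 0) + (BENEFIT if i_give else 0)
    return (my_payoff, their_payoff)

print(causal_trade(True))    # (2, 2): positive sum via a conditional exchange
print(acausal_trade(True))   # (2, 2): positive sum with no causal link between transfers
print(acausal_trade(False))  # (0, 0): nothing happens when the other side doesn't reason like me
```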
Provided “I” and “you” are both real, existing entities and are not counterfactuals.
If you give things to a figment of your imagination and it gives things back to you, well, either you have something going on with your tulpa or you probably should see a psychotherapist :-/