I’m trying to set up a sufficiently inconvenient possible world by introducing additional assumptions. The one about MWI blocks the excuse that there are other real versions of you in the other MWI branches who do receive the $10000. Not allowed.
How do you pick the threshold, decide that [$10000] < [decision threshold] < [your life]?
You’ve actually made it an easier problem for me, though, because I regard my alternate selves as other people.
How do you pick the threshold, decide that [$10000] < [decision threshold] < [your life]?
If it were possible for me to make a deal with my alternate self by which I get a few thousand dollars, I would obviously surrender my $100. As it isn’t possible, I see little reason to give someone otherwise destined to be forever causally isolated from me $10000 at a cost of $100 to myself. I wouldn’t keep the $100 if it meant he lost $10000, either. I probably would keep the $100 if he lost less than $100. If my alternate self stood to gain, say, a million dollars, but nothing if I kept my $100, then I would probably give it up. But that would be a whimsy, something to think about and feel good about, and the benefit to me of that whimsy would have to be worth more than $100.
The pattern behind my choices is that the pain experienced by my alternate self (who, recall, I consider a different person) in any of these cases is never more than $100. I think this is the most we can expect, on average, of other intelligent beings: that they will not inflict a large loss for a small gain. Why not steal, in that case? Because there is, in fact, no such thing as total future causal isolation.
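A minimal sketch in Python of the decision rule described in the comment above; the function name, the whimsy_value parameter, and the example numbers below are illustrative assumptions, not part of the original comment.

```python
# A sketch of the stated decision rule: never let the counterpart lose more than
# the $100 stake, and only gift a windfall when the "whimsy" (feel-good) value of
# doing so exceeds the stake. Names and numbers here are illustrative only.

STAKE = 100  # the $100 you are asked to give up


def keep_the_100(counterpart_loss_if_kept: float,
                 counterpart_gain_if_given: float,
                 whimsy_value: float = 0.0) -> bool:
    """Return True if the rule says to keep the $100."""
    if counterpart_loss_if_kept > STAKE:
        return False  # never inflict a large loss for a small gain
    if counterpart_gain_if_given > 0 and whimsy_value > STAKE:
        return False  # give it up as a whimsy, but only if the whimsy is worth more than $100
    return True  # otherwise keep the $100


# The cases from the comment:
print(keep_the_100(10000, 0))                        # False: keeping it would cost him $10000
print(keep_the_100(50, 0))                           # True: he would lose less than $100
print(keep_the_100(0, 1_000_000, whimsy_value=150))  # False: the million-dollar gift is worth the whimsy
```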
There is no alternative self. None at all. The alternative may be impossible according to the laws of physics; it is present only in your imperfect model of the world. You can’t trade with a fiction, and you shouldn’t empathize with a fiction. What you decide, you decide in this, our real world. You decide that it is right to make a sacrifice according to your preferences, which live only in your model of the world but speak about reality.
I think that this is a critical point, worthy of a blog post of its own. Impossible possible worlds are a confusion.
The inclination to trade with fiction seems like a serious problem within this community.
I’ve misunderstood you to an extent, then.
My preferences don’t involve me sacrificing unless someone can get hurt. It doesn’t matter whether that person exists in another Everett branch, within Omega or in another part of the Tegmark ensemble, but there must be a someone. I’ll play symmetrist with everyone else (which is, in a nutshell, what I said in my comment above) but not with myself. You seem to want a person that is me, but minus the “existence” property. I don’t think that is a coherent concept.
OK, suppose that Omega came along right now and said to me “I have determined that if you could be persuaded that your actions would have no consequence, and then given the problem you are currently discussing, you would in every case keep $100. Therefore I will torture you endlessly.” I would not see this as proof of my irrationality (in the sense of hopelessly failing to achieve my preferences). I don’t think that such a sequence of events is germane to the problem as you see it, but I also don’t see how it is not germane.
How much do you know about many worlds, anyways? My alternate self very much does exist; the technical term is possibility-cloud, which will eventually diverge noticeably but which for now is just barely distinguishable from me.
There you go.
Vladimir_Nesov!2009 knew more than enough about Many Worlds to know how to exclude it as a consideration. Vladimir_Nesov!2013 probably hasn’t forgotten.
My alternate self very much does exist; the technical term is possibility-cloud, which will eventually diverge noticeably but which for now is just barely distinguishable from me.
No. It doesn’t exist. Not all uncertainty represents knowledge about quantum events that will have significant macroscopic relevance. Some represents mere ignorance. This ignorance can be about events that are close to deterministic, which means the ‘alternate selves’ have negligible measure and even less decision-theoretic relevance. Other uncertainty is logical uncertainty, where the alternate selves don’t even exist in that trivial, irrelevant sense; it was just that the participant didn’t know that “2+2=4” yet.
There may be fewer of those than you realize.
Given that many-worlds is true, yes. Invoking it kind of defeats the purpose of the decision-theory problem, though, as it is meant as a test of reflective consistency (i.e., you are supposed to assume you prefer $100 > $0 in this world, regardless of any other worlds).
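For reference, a rough sketch of the expected-value arithmetic behind that reflective-consistency test, assuming the standard counterfactual-mugging setup with a fair coin, a $100 ask, and a $10000 reward paid only to agents who would have handed over the $100; the fair-coin probability is an assumption, since the thread does not restate the odds.

```python
# A sketch of the expected-value tension in the counterfactual mugging, assuming
# the standard setup: Omega flips a fair coin; on one outcome it asks you for
# $100, on the other it pays $10000 only to agents who would have paid.

P_REWARD_BRANCH = 0.5  # assumed fair coin


def expected_value(pays_when_asked: bool) -> float:
    """Ex-ante expected winnings for an agent with the given disposition."""
    reward = 10_000 if pays_when_asked else 0  # received on the reward branch
    cost = -100 if pays_when_asked else 0      # handed over on the asking branch
    return P_REWARD_BRANCH * reward + (1 - P_REWARD_BRANCH) * cost


print(expected_value(True))   # 4950.0 -- the paying disposition wins ex ante
print(expected_value(False))  # 0.0    -- yet once you know you are on the asking branch,
                              # keeping the $100 looks better, which is the tension
```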