We’re still talking past each other, I’m afraid.
What’s “expected utility” in situations with indexical uncertainty? If you take the “expectation” according to an equal weighting of all indistinguishable observer-moments, isn’t your reasoning circular?
Also I’m interested in hearing your response to rwallace’s scenario, which seems to show that assigning equal probabilities to indistinguishable observer-moments leads to time-inconsistency.
Ah, wait: the probability in rwallace’s post was 1/99 all along. Each time you do the “clone me” operation, it doesn’t automatically halve the probability; that only works for a particular set of evidence. Once you’re careful about specifying which evidence is available, the apparent problem is resolved.
I’m afraid I can’t take your word for that, please show me the calculations.
Hmm, actually I might be making the wrong correction, since this would contradict the rule that P(AB) = P(A)*P(B|A). But my plan was to specify the “anthropic evidence” (memories, body, etc.) as exactly the stuff that makes you “you” at the start of the process, and then pose the question as P(original | anthropic evidence).
Upon reflection, this is very shaky, but still possibly correct. I’ll try to formalize the change to the product rule and see what it says about Sleeping Beauty, which has a reasonably well-established answer.
This still doesn’t look like the calculations that I asked for… ?
Oh, sorry; the calculations are trivial. It’s the parts that aren’t math that are the problem.
Take a person considering whether to get copied 98 times. He wants to know: “after I get out of the machine, but before I’m told whether I’m the original, what is the probability that I am the original?” There are two different edge cases.
1) If the copying process generates no new evidence, i.e. you and all your copies come out with near-identical memories, then the principle of indifference applies with overwhelming obviousness. You have nothing with which to differentiate yourself, so you must assign equal probability to each of the 99 people, and the probability that you’re the original is 1/99.
2) You get copied in 98 separate sessions, generating extra evidence (memories) between sessions. Here the only place you can apply the principle of indifference is within each session, so after every session you assign probability 1/2 to being the original. The product rule P(AB) = P(A)*P(B|A) then says your final probability of being the original is (1/2)^98. But this feels odd, because at no time during the process could this probability be realized: on waking up you always estimate 1/2, which makes it immune to the sort of betting game you might play in e.g. the Sleeping Beauty problem. (Both numbers are worked out in the sketch below.)
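Here is a minimal sketch of the two calculations, just to pin the numbers down (my own illustration; the variable names and the use of Python’s Fraction are incidental):

```python
from fractions import Fraction

N_COPIES = 98  # 98 copies plus the original = 99 people in total

# Case 1: every one of the 99 end states carries the same evidence
# (identical memories), so indifference over all of them applies directly.
p_case1 = Fraction(1, N_COPIES + 1)

# Case 2: 98 separate sessions with distinguishing memories in between.
# Indifference only applies within a session (original vs. fresh copy),
# so each session contributes a factor of 1/2 and the product rule chains them.
p_case2 = Fraction(1, 2) ** N_COPIES

print(p_case1)  # 1/99
print(p_case2)  # 1/316912650057057350374175801344, i.e. (1/2)^98
```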
With these two cases in mind, consider a copying scheme with the causal structure of (2) but the evidence of (1): copied in series, but with no distinguishing memories. Logic says the evidence should win, since the evidence usually wins. But then either the product rule stops working, or P(B|A) changes in an unusual way that exactly parallels what the evidence requires. That licenses just using the evidence and ignoring the product rule in these cases, but it would be interesting to see whether the product rule can be saved.
Saving the product rule would require changing the P(B|A) in P(AB) = P(A)*P(B|A). Look at the case where you’re copied twice in a row with identical memories, with A = “original after the first copying” and B = “original after the second.” One would expect P(A) = 1/2. But P(AB) = 1/3 (three identical people at the end), so P(B|A) must equal 2/3. For 98 copies, the last conditional probability in the chain would have to be 98/99! This is really weird, so I’m still thinking about it.
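As a sanity check (my own sketch, not from the original discussion; the function name and the k/(k+1) hypothesis are taken from the paragraph above): if the k-th copying is assigned the conditional probability k/(k+1), the chain telescopes to exactly the indifference answer.

```python
from fractions import Fraction

def p_original_after(n_copies):
    """Chain the product rule, assigning the k-th copying the conditional k/(k+1)."""
    p = Fraction(1)
    for k in range(1, n_copies + 1):
        p *= Fraction(k, k + 1)  # hypothesized conditional for the k-th copying
    return p

print(p_original_after(1))   # 1/2   -> P(A)
print(p_original_after(2))   # 1/3   -> P(AB), hence P(B|A) = (1/3)/(1/2) = 2/3
print(p_original_after(98))  # 1/99  -> agrees with the indifference answer
print(Fraction(98, 99))      # 98/99 -> the last conditional in the 98-copy chain
```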
Hey, I just had an idea. To get the required values of P(B|A), you might consider the possibility that the “subjective continuation” of a copy (made from the original) can jump into another copy (also made from the original, not from the first copy). There seems to be no a priori reason why that shouldn’t happen if the information states of the copies are equivalent. Why focus on physical continuity within the copying process anyway? Information is all that matters.
This way you get to keep the illusion that you “are” one specific copy at all times, rather than the set of all identical information states at once (my preferred point of view up till now). I wonder if some other thought experiment could break that illusion more conclusively.
Hm, I guess it is circular. Dang. The question is really “is Bayesian probability correct in general?”
Do you mean rwallace’s scenario with the copies? The probabilities seem correct, though since there are multiple copies, normalization might be broken (or artificially enforced) somewhere I didn’t notice, as in Sleeping Beauty. I’m a bit unsure. What is clear is that there isn’t actually a discontinuity at short times, since the probability comes from the evidence of your memories, not from how it “really happened.”
EDIT: There does appear to be an inconsistency when making serial copies—computing it different ways gives different answers. Freaking normalization.