Still the same. All I can say is that I am either the Original or the Clone, since my credence for each is still “I don’t know”.
And this number-crunching goes both ways. Say the Mad Scientist only produces valid Clones in 1% of his experiments; however, when he succeeds, he produces 1 million of them. Then what is the probability of me being the Original? I assume people would say close to 0.
This logic could lead to some strange actions, such as the Brain-Race described by Adam Elga here. You could force someone to act to your liking by making 100s of his Clones with the same memory: if he doesn’t comply, you will torture all these Clones. Then the best strategy for him is to play ball, because he is most likely a Clone. However, he could counter that by making 1000s of Clones of himself that will be tortured if they do act to your liking. But you could make 100,000s of Clones, and he could make 10,000,000s, etc.
Personally, no, I wouldn’t say close to 0 in that situation. While the expected number of clones is 10,000, and hence the expected number of observers is 10,001, I can’t think of a measure for which dividing by this quantity results in anything sensible. Generally it is not true that E[1/X] = 1/E[X]. While I have seen plenty of messed-up calculations in self-locating probability, I haven’t previously seen that particular one.
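To make that concrete with the numbers above (a rough sketch, writing N for the number of clones actually created): treating myself as equally likely to be any observer in whichever world is actual, the chance of being the Original is 1/(N+1), and averaging over worlds gives E[1/(N+1)] = 0.99 × 1 + 0.01 × 1/1,000,001 ~= 0.99. By contrast, 1/E[N+1] = 1/10,001 ~= 0.0001, which is the sort of “close to 0” figure you only get by dividing by the expected number of observers.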
Regarding Dr. Evil in the linked scenario, I believe the whole scenario is pretty much pointless. Even knowing that they might be a Dupe, any cartoon super-villain like that is going to launch the weapon anyway.
Similarly in your scenario, there are factors outside self-locating credence that will affect behaviour. In a world with such cheap and easy remote duplication technology with no defence, people will develop strategies to deal with it. For example, pre-commitment to not comply with terrorist demands regardless of what is threatened. At any rate, a hostage’s life is almost certainly going to be very short and probably unpleasant regardless of whether the original complies. It’s not like there’s even any point in them actually going to the trouble of torturing thousands of duplicates except to say (with little credibility) “now look what you made me do”.
As I see it, this just drags a whole bunch of extra baggage into the scenario, such as personal and variable notions of personal identity, empathy, and/or altruism, that does nothing but distract from the question at hand:
whether levels of credence in such situations can be assigned numerical values that obey rules of probability.
>Personally, no, I wouldn’t say close to 0 in that situation. While the expected number of clones is 10,000, and hence the expected number of observers is 10,001, I can’t think of a measure for which dividing by this quantity results in anything sensible.
Wait, are you saying there is no sensible way to assign a value to self-locating probability in this case? Or are you disagreeing with this particular way of assigning a self-locating probability and endorsing another method?
You said “I assume people would say close to 0”. I don’t know why you said that. I don’t know how you arrived at that number, or why you would impute it to people in general. The most likely way I could find to arrive at a “close to 0” number was to make an error that I have seen a few times in the context of students calculating probabilities, but not previously in self-locating probabilities.
How did you arrive at the idea that “people would say close to 0”?
Because the thirder camp is currently the dominant position on the Sleeping Beauty Problem, and because the Self-Indication Assumption has far more supporters than the Self-Sampling Assumption. The Self-Indication Assumption treats “I” as a randomly selected observer from all potentially existing observers, which in this case would give a probability of being the Original close to 0.
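As a rough sketch of how that number comes out (this is my reading of the SIA-style calculation, using the 1% / 1 million figures above): the no-clone world has probability 0.99 and 1 observer, while the clone world has probability 0.01 and 1,000,001 observers. Weighting each potential observer by the probability of the world he exists in, the total weight is 0.99 × 1 + 0.01 × 1,000,001 = 10,001, of which the weight on “I am the Original” is 0.99 + 0.01 = 1. So the probability of being the Original comes out to 1/10,001 ~= 0.0001, i.e. close to 0.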
I am not saying you have to agree with it. But do you have a method in mind that arrives at a different probability? If so, what is the method? Or do you think there is no sensible probability value for this case?
One possible derivation:
P(mad scientist created no valid clones) = 0.99 as given in the problem description; P(me being the original | no clones exist) = 1; therefore P(me being the original & no clones exist) = 0.99.
P(mad scientist created 1,000,000 clones) = 0.01; P(me being the original | 1,000,000 clones) = 1/1,000,001 ~= 0.000001. Therefore P(me being the original & 1,000,000 clones exist) ~= 0.00000001.
P(me being the original) = P(me being the original & no clones exist) + P(me being the original & 1,000,000 clones exist) ~= 0.99000001, as these are disjoint, exhaustive events.
0.99000001 is not “close to 0”.
You just stated the Self-Sampling Assumption’s calculation.
And about the Self-Indication Assumption’s method, you said: “The most likely way I could find to arrive at a ‘close to 0’ number was to make an error that I have seen a few times in the context of students calculating probabilities, but not previously in self-locating probabilities.”
Are you endorsing SSA over SIA? Or are you just listing the different camps in anthropic paradoxes?
No, I had just forgotten the exact statement of Bostrom’s original SIA. It doesn’t apply in this case anyway, since it is only meant to apply when other things are equal, and here they aren’t.