Hmm. It seems to me that I update just fine. If I flip a quantum coin and it comes up heads, and afterwards I face a decision problem whose outcome depends on that coinflip, then UDT prescribes behavior that looks like I had updated.
Anyway, if my way of updating is wrong, then what way is right?
I’m sympathetic to that approach, but most of the folk supporting or opposing the SSA (or SIA) Doomsday Argument aren’t. From the title of your post I thought you were trying to understand what people were talking about. The Doomsday argument comes from updating principles like SSA, as discussed here.
I am trying to understand what people are talking about, precisely, and I asked on LW because people here are more likely to have a precise understanding of the DA than most philosophers.
If my original example takes place in a Big World (e.g. the total population depends on a quantum event that happened long ago), then it seems to me that the SSA doesn’t make the DA go through. Let’s say an urn contains 1 red ball, 1000 yellow balls and 1000000 green balls. Balls of each color are numbered. You draw a ball at random and see that it says “50”, but you’re colorblind and cannot see the color. Then Bayes says you should assign 0 probability to red, 0.5 to yellow, and 0.5 to green, thus the relative probabilities of “worlds compatible with your existence” are unchanged.
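A quick sanity check of that arithmetic in Python, assuming every ball in the single urn is equally likely to be drawn and each color is numbered from 1 upward:

```python
from fractions import Fraction

# One urn: 1 red ball, 1000 yellow balls, 1,000,000 green balls,
# each color numbered 1, 2, 3, ...; every ball equally likely to be drawn.
counts = {"red": 1, "yellow": 1000, "green": 1_000_000}
total = sum(counts.values())  # 1,001,001 balls

# Joint probability of each color AND the ball being marked 50:
# exactly one ball of a color is marked 50, provided that color has >= 50 balls.
joint = {c: (Fraction(1, total) if n >= 50 else Fraction(0)) for c, n in counts.items()}

# Condition on the observation "the ball says 50".
evidence = sum(joint.values())
posterior = {c: p / evidence for c, p in joint.items()}
print(posterior)  # red: 0, yellow: 1/2, green: 1/2
```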
So I’m still confused. Does the updating rule used by the DA rely on a fundamental difference between big worlds and small worlds? This looks suspicious, because human decisions shouldn’t change depending on whether a coinflip is classical or quantum, yet the SSA seems to say they should, by arbitrarily delineating parts of reality as “worlds”. There’s got to be a mistake somewhere.
The implied algorithm is that you first pick a world size s from some distribution, and then pick an index uniformly from 1..s. This corresponds to the case where there are three separate urns, containing 1 red ball, 1000 yellow balls, and 10^6 green balls respectively, and you pick from one of the urns without knowing which one it is.
(I find the second part, picking an index uniformly from 1..s, questionable; but there’s only one sample of evidence with which to determine what the right distribution would be, so there’s little point in speculating on it.)
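Here’s a sketch of that two-stage model; the uniform prior over the three urns is just a placeholder for “some distribution”:

```python
from fractions import Fraction

# Two-stage model: first pick one of three urns (1 red ball, 1000 yellow
# balls, 10^6 green balls), then pick a ball uniformly from that urn.
# The uniform prior over urns is an arbitrary stand-in for "some distribution".
sizes = {"red": 1, "yellow": 1000, "green": 1_000_000}
prior = {c: Fraction(1, 3) for c in sizes}

# Likelihood of drawing the ball marked 50 from each urn.
likelihood = {c: (Fraction(1, n) if n >= 50 else Fraction(0)) for c, n in sizes.items()}

unnormalized = {c: prior[c] * likelihood[c] for c in sizes}
evidence = sum(unnormalized.values())
posterior = {c: p / evidence for c, p in unnormalized.items()}
print(posterior)  # yellow: 1000/1001, green: 1/1001 -- the Doomsday-style shift toward the smaller world
```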
Let’s say an urn contains 1 red ball, 1000 yellow balls and 1000000 green balls. Balls of each color are numbered.
This is not equivalent to the original problem. In the original problem, if there are 1000 people you have a 1/1000 chance of being the 50th, and if there are 1,000,000 people you have a 1/1,000,000 chance of being the 50th. In your formulation, you have a 1/1,001,001 chance of getting each of the balls marked ‘50’.
It might be equivalent to have the urn contain one million red balls marked ‘1’, one million yellow balls divided into one thousand sets which are each numbered one through one thousand, and one million green balls numbered one through one million. In this case, if you draw a ball marked ‘50’, it can be either the one green ball that’s marked ‘50’, or any of the thousand yellow balls marked ‘50’, and the latter case is one thousand times more likely than the former.
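A quick brute-force check that this construction gives the same thousand-to-one ratio:

```python
from collections import Counter

# One urn with equal numbers of each color: one million red balls all marked 1,
# one thousand yellow sets each numbered 1..1000, and one million green balls
# numbered 1..1,000,000.
def balls():
    for _ in range(1_000_000):
        yield ("red", 1)
    for _ in range(1000):
        for n in range(1, 1001):
            yield ("yellow", n)
    for n in range(1, 1_000_001):
        yield ("green", n)

# Condition on drawing a ball marked 50: count the surviving balls by color.
print(Counter(color for color, number in balls() if number == 50))
# Counter({'yellow': 1000, 'green': 1}) -- the 1000-person world is a thousand
# times more likely, matching the original Doomsday-style update.
```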
Thanks. Carl, jimrandomh and you have helped me understand what the original formulation says about probabilities, but I still can’t understand why it says that. My grandparent comment and its sibling can be interpreted as arguments against the original formulation, what do you think about them?
In general I’m a lousy one to ask about probability; I only noticed this particular thing after a few days of contemplation. I was more hoping that someone else would see it and be able to use it to form a more coherent explanation.
I do think, regarding the sibling, that creating or destroying people is incompatible with assuming that a certain number of people will exist—I expect that a hypothesis which generates that prediction carries an implicit assumption that nobody is going to create, destroy, or fail to create people on the basis of the hypothesis itself. In other words, causation doesn’t work like that.
Edit: It might help to note what originally led me to notice that your formulation was flawed: the different worlds—represented by the different colors—were not equally likely. If you pick a ball out of your urn and don’t look at the number, it’s much more likely to be green than yellow, and very, very unlikely to be red. If you pick a ball out of my urn, there’s an even chance of it being any of the three colors.
I thought about SSA some more and came up with a funny scenario. Imagine the world contains only one person and his name is Bob. At a specified time Omega will or won’t create 100 additional people depending on a coinflip, none of whom will be named Bob.
Case 1: Bob knows that he’s Bob before the coinflip. In this case we can all agree that Bob can get no information about the coinflip’s outcome.
Case 2: Bob takes an amnesia drug, goes to sleep, the coinflip happens and people are possibly created, Bob wakes up thinking he might be one of them, then takes a memory restoration drug. In this case SSA leads him to conclude that the additional people probably weren’t created (worked numbers below), even though he has the same information as in case 1.
Case 3: the coinflip happens, Bob takes the amnesia drug, then immediately takes the memory restoration drug. SSA says this operation isn’t neutral and Bob should switch from case 1 to case 2. Moreover, Bob can anticipate changing his beliefs this way, but that doesn’t affect his current beliefs. Haha.
Bonus: what if Omega is spacelike separated from Bob?
The only way to rescue SSA is to bite the bullet in case 1 and say that Bob’s prior beliefs about the coinflip’s outcome are not 50/50; they are “shifted” by the fact that the coinflip can create additional people. So SSA allows Bob to predict with high confidence the outcome of a fair coinflip, which sounds very weird (though it can still be right). Note that using UDT or “big-world SSA” as in my other comment will lead to more obvious and “normal” answers.
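For concreteness, here’s the case 2 arithmetic under SSA, assuming the usual reference class of everyone who ever exists:

```python
from fractions import Fraction

# Fair coin: "created" -> Omega makes 100 extra people (101 people total),
# "not_created" -> Bob stays alone.
prior = {"created": Fraction(1, 2), "not_created": Fraction(1, 2)}

# SSA: reason as if you were a random sample from everyone who exists.
# Probability that the sampled person turns out to be Bob:
p_i_am_bob = {"created": Fraction(1, 101), "not_created": Fraction(1)}

unnormalized = {w: prior[w] * p_i_am_bob[w] for w in prior}
evidence = sum(unnormalized.values())
posterior = {w: p / evidence for w, p in unnormalized.items()}
print(posterior)  # created: 1/102, not_created: 101/102
# Merely learning "I'm Bob" after the memory restoration makes Bob ~99%
# confident that the extra people were never created.
```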
ETA: my scenario suggests a hilarious way to test SSA experimentally. If many people use coinflips to decide whether to have kids, and SSA is true, then the results will be biased toward “don’t have kids”, because doomsday “wants” to happen sooner and pushes the probabilities accordingly :-)
ETA2: or you could kill or spare babies depending on coinflips, thus biasing the coins toward “kill”. The more babies you kill, the stronger the bias.
ETA3: or you could win the lottery by precommitting to create many observers if you lose. All these scenarios make SSA and the DA look pretty bad.