If you’re the only person in the world right now, and Omega is about to flip a fair coin and create 100 people in case of heads, then SSA tells you to be 99% sure of tails, while SIA says 50/50. There’s just no way SSA is right on this one.
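To spell out where those two numbers come from, here's the arithmetic as I understand the standard SSA and SIA calculations (the reference-class and observer-counting assumptions are mine, stated in the comments):

```python
# Fair coin: both worlds start at probability 1/2.
prior_heads = prior_tails = 0.5

# SSA: in each world, ask how likely it is that "I" am this particular
# observer. The tails-world only ever contains 1 observer; the heads-world
# will contain 101 once the extra 100 people are created.
ssa_tails = prior_tails * 1          # I'm the only observer for sure
ssa_heads = prior_heads * (1 / 101)  # I'm 1 of 101 possible observers
print(ssa_tails / (ssa_tails + ssa_heads))  # ~0.99, i.e. ~99% sure of tails

# SIA: additionally weight each world by how many observers it contains,
# which exactly cancels the 1/101 factor above.
sia_tails = prior_tails * 1 * 1
sia_heads = prior_heads * 101 * (1 / 101)
print(sia_tails / (sia_tails + sia_heads))  # 0.5, i.e. 50/50
```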
If the program has already generated one problem and added it to P, and then generates 1 or 0 randomly for W and adds 100W problems to P—which is basically the same as my first model, and should be equivalent to SSA—then I should expect a 50% chance of having 1 problem in P and a 50% chance of having 101 problems in P, and also a 50% chance of W=1.
If it does the above, and then generates a random number X between 1 and 101, and only presents me with a problem if there’s a problem numbered X, and I get shown a problem, I should predict a ~99% chance that W=1. I think this is mathematically equivalent to SIA. (It is, provided my second formulation in the OP is equivalent to SIA, which I think it is, even if it’s rather roundabout.)
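Here's a quick Monte Carlo sketch of both procedures, representing P just by how many problems it contains (the setup is my own paraphrase of the two models above):

```python
import random

def generate():
    """Generate one problem, flip W, then add 100*W more problems."""
    w = random.randint(0, 1)
    return w, 1 + 100 * w  # (W, number of problems in P)

N = 100_000

# First procedure (the SSA-like model): just look at W.
print(sum(generate()[0] for _ in range(N)) / N)  # ~0.5

# Second procedure (the SIA-like model): draw X from 1..101 and only
# count runs where a problem numbered X exists, i.e. where I get shown one.
shown = []
for _ in range(N):
    w, num_problems = generate()
    x = random.randint(1, 101)
    if x <= num_problems:
        shown.append(w)
print(sum(shown) / len(shown))  # ~0.99
```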
Yeah, that’s what SSA says you should expect before updating :-) In my example you already know that you’re the first person, but don’t know if the other 100 will be created or not. In your terms this is equivalent to updating on the fact that you have received math problem number 1, which gives you high confidence that the fair coinflip in the future will come out a certain way.
And after updating, as well. The first math problem tells you basically nothing, since it happens regardless of the result of the coin flip/generated random number.
Ignore the labels for a minute. Say I have a box, and I tell you that I flipped a coin earlier and put one rock in the box if it was heads and two rocks in the box if it was tails. I then take a rock out of the box. What’s the chance that the box is now empty? How about if I put three rocks in for tails instead of two?
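Worked out numerically, taking "take a rock out" to mean I remove one rock unconditionally and without showing you which (my reading of the setup):

```python
import random

def box_is_empty(rocks_if_tails):
    """One rock for heads, rocks_if_tails for tails; then one rock is removed."""
    tails = random.random() < 0.5
    rocks = rocks_if_tails if tails else 1
    return rocks - 1 == 0  # empty exactly when the coin came up heads

N = 100_000
print(sum(box_is_empty(2) for _ in range(N)) / N)  # ~0.5
print(sum(box_is_empty(3) for _ in range(N)) / N)  # ~0.5, unchanged
```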
I refuse to ignore the labels! :-) Drawing the first math problem tells me a lot, because it’s much more likely in a world with 1 math problem than in a world with 101 math problems. That’s the whole point. It’s not equivalent to drawing a math problem and refusing to look at the label.
Let’s return to the original formulation in your post. I claim that being shown P(1) makes W=0 much more likely than W=1. Do you agree?
If I know that it’s P(1), and I know that it was randomly selected from all the generated problems (rather than being shown to me because it’s the first one), then yes.
If I’m shown a single randomly selected problem from the list of generated problems without being told which problem number it is, it doesn’t make W=0 more likely than W=1 or W=2.
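To put numbers on that distinction, under the assumption (mine, matching the model upthread) that a world with a given W contains 100W + 1 problems and the displayed problem is drawn uniformly from that list:

```python
# Probability of being shown specifically P(1), given W, when the shown
# problem is a uniform draw from the 100*W + 1 problems in that world.
def p_shown_is_p1(w):
    return 1 / (100 * w + 1)

# Seeing the label "P(1)" gives a 101:1 likelihood ratio for W=0 over W=1:
print(p_shown_is_p1(0) / p_shown_is_p1(1))  # 101.0

# Seeing *some* unlabeled problem has probability 1 under every W,
# so the likelihood ratio is 1:1 and nothing gets updated.
```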