Taking a bet is not the same as determining a probability if your utility function changes in some cases (e.g. if you are altruistic in some cases but not others). Precommitting to odds that are not the same as the probability is consistent with SIA in these cases.
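To make that concrete, here is a minimal sketch (the per-awakening payoff rule and the standard Sleeping Beauty setup are my assumptions, not something from the post) of how a 1/2 credence can coexist with precommitting to different odds:

```python
# Sketch (assumed setup: standard Sleeping Beauty, heads -> 1 awakening,
# tails -> 2 awakenings, and the bet is settled at every awakening).
# With a halfer credence of 1/2 in heads, the break-even stake for
# "pay `cost` per awakening, receive 1 per awakening if tails" is 2/3,
# i.e. the precommitted betting odds are the thirder odds.

def expected_value(cost, p_heads=0.5):
    """Expected net payoff of precommitting to the per-awakening bet."""
    ev_heads = 1 * (-cost)       # one awakening, bet lost once
    ev_tails = 2 * (1 - cost)    # two awakenings, bet won twice
    return p_heads * ev_heads + (1 - p_heads) * ev_tails

for cost in (0.5, 2 / 3, 0.7):
    print(f"stake {cost:.3f}: EV = {expected_value(cost):+.3f}")
# stake 0.500: EV = +0.250
# stake 0.667: EV = +0.000
# stake 0.700: EV = -0.050
```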
This post doesn’t destroy SIA. It just destroys the argument that I found to be the strongest one in its favour.
Huh. I’ve always favored the principle of indifference (that equal information states should have equal probability) myself.
How exactly does it destroy that argument? It does look like this post is arguing about the question of what odds you should bet at, not about the question of what you think is likely the case. These are not exactly the same thing. I would be willing to bet any amount, at any odds, that the world will still exist 10 years from now, or 1000 years from now, but that doesn’t mean that I am confident that it will. It simply means I know I can’t lose that bet, since if the world doesn’t exist, neither will I nor the person I am betting with.
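To spell out the arithmetic behind "I can’t lose": a rough sketch with made-up numbers, counting only the outcomes the bettor can actually experience:

```python
# Sketch (illustrative numbers): a bet on "the world still exists in 10 years"
# can only be settled in worlds where both parties still exist, so every
# payoff the bettor can actually experience is a win, whatever their credence.

def experienced_outcomes(p_world_survives, stake):
    """(probability, payoff) pairs the bettor can actually live through."""
    outcomes = [(p_world_survives, +stake)]  # world survives: bet settled, won
    # World doesn't survive: nobody is around to settle; no experienced outcome.
    return outcomes

for p in (0.99, 0.50, 0.01):
    print(f"P(survival) = {p:.2f}: experienced outcomes = {experienced_outcomes(p, 100)}")
# Every experienced outcome pays +100, so eagerness to take the bet
# carries no information about P(survival).
```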
(I agree that the other post was mistaken, and I think it went from a 99% probability in A, B, and C, to a 50% probability in the remaining scenarios.)
I think my old post here has the core of the argument: http://lesswrong.com/lw/18r/avoiding_doomsday_a_proof_of_the_selfindication/14vy
But I no longer consider anthropic probabilities to have any meaning at all; see for instance https://www.youtube.com/watch?v=aiGOGkBiWEo
Ok. I watched the video. I still disagree with that, and I don’t think it’s arbitrary to prefer SSA to SIA. I think that follows necessarily from the consideration that you could not have noticed yourself not existing.
In any case, whatever you say about probability, being surprised is something that happens in real life. And if someone did the Sleeping Beauty experiment on me in real life, but so that the difference was between 1⁄100,000 and 1⁄2, and then asked me if I thought the coin was heads or tails, I would say I didn’t know. And then if they told me it was heads, I would not be surprised. That shows that I agree with the halfer reasoning and disagree with the thirder reasoning.
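For concreteness, here is one way to read that variant (my assumption: tails means roughly 100,000 awakenings instead of two) and what the two camps' credences would be:

```python
# Sketch of the extreme variant (assumed reading: heads -> 1 awakening,
# tails -> N awakenings, with N picked so the thirder answer is ~1/100,000).

def halfer_p_heads(n_tails_awakenings):
    # Halfer/SSA-style answer: the coin is fair and waking up is no evidence.
    return 0.5

def thirder_p_heads(n_tails_awakenings):
    # Thirder/SIA-style answer: weight every possible awakening equally.
    return 1 / (1 + n_tails_awakenings)

N = 99_999
print(f"halfer  P(heads | awake) = {halfer_p_heads(N):.6f}")   # 0.500000
print(f"thirder P(heads | awake) = {thirder_p_heads(N):.6f}")  # 0.000010
# A thirder should be astonished to hear "heads"; a halfer should not be
# surprised at all, which is the intuition being reported above.
```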
Whether or not it makes sense to put numbers on it, either you’re going to be surprised at the result or not. And I would apply that to basically every case of SSA argument, including the Doomsday argument; I would be very surprised if 1,000,000 years from now humanity has spread all over the universe.
As someone who has actually experienced in real life how it feels to wake from an artificial coma with multiple days missing from memory, I think your naive intuition about what would surprise you has no basis.
Being surprised happens at the System 1 level, and System 1 has no notion of having been in an artificial coma.
If System 1 has no notion of being in an artificial coma, then there is no chance I would be surprised by either heads or tails, which supports my point.
No, System 1’s model of the world is that the time that passed was just a normal night’s sleep between two days. Anything that deviates from that is highly surprising.
Yes, but if Sleeping Beauty problems happened all over the place and we were commonly exposed to them, what would our sense of surprise evolve to?