Why do I think anthropic reasoning and consciousness are related?
In a nutshell, I think subjective anticipation requires subjectivity. We humans feel dissatisfied with a description like “well, one system running a continuation of the computation in your brain ends up in a red room and two such systems end up in green rooms” because we feel that there’s this extra “me” thing, whose future we need to account for. We bother to ask how the “me” gets split up, what “I” should anticipate, because we feel that there’s “something it’s like to be me”, and that (unless we die) there will be in future “something it will be like to be me”. I suspect that the things I said in the previous sentence are at best confused and at worst nonsense. But the question of why people intuit crazy things like that is the philosophical question we label “consciousness”.
However, the feeling that there will be in future “something it will be like to be me”, and in particular that there will be one “something it will be like to be me”, if taken seriously, forces us to have subjective anticipation, that is, to write down a probability distribution, summing to one, over which copy we end up as. Once you do that, if you wake up in a green room in Eliezer’s example, you are forced to update to 90% probability that the coin came up heads (provided you distributed your subjective anticipation evenly among all twenty copies in both the heads and tails scenarios, which really seems like the only sane thing to do).
Or, at least, the same amount of “something it is like to be me”-ness as we started with, in some ill-defined sense.
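For concreteness, here is a minimal sketch of that update. The exact numbers are my assumption rather than a quote of Eliezer’s setup: a fair coin, with eighteen of the twenty copies waking in green rooms on heads and two on tails, which is the split that matches the 90% figure above.

```python
# A minimal sketch of the anthropic update, under assumed numbers:
# fair coin; heads puts 18 of the 20 copies in green rooms (2 in red),
# tails puts 2 in green (18 in red).

def posterior_heads(p_heads=0.5, green_if_heads=18, green_if_tails=2, n_copies=20):
    """Bayes' rule, treating 'which copy I wake up as' as uniform over the copies."""
    p_green_given_heads = green_if_heads / n_copies  # 0.9
    p_green_given_tails = green_if_tails / n_copies  # 0.1
    numerator = p_heads * p_green_given_heads
    return numerator / (numerator + (1 - p_heads) * p_green_given_tails)

print(posterior_heads())  # 0.9
```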
On the other hand, if you do not feel that there is any fact of the matter as to which copy you become, then you just want all your copies to execute whatever strategy gets all of them the most money in expectation, judged from your initial perspective of ignorance about the coinflip.
Incidentally, the optimal strategy looks like a policy selected by updateless decision theory, not like any probability of the coin having been heads or tails. PlaidX beat me to the counter-example for p=50%. Counter-examples like PlaidX’s will work for any p&lt;90%, and counter-examples like Eliezer’s will work for any p&gt;50%, so that pretty much covers it. So, unless we want to include ugly hacks like responsibility, or unless we let the copies reason Goldenly (using Eliezer’s original TDT) about each other’s actions as transposed versions of their own actions (which does correctly handle PlaidX’s counter-example, but might break in more complicated cases where no isomorphism is apparent), there simply isn’t a probability-of-heads that represents the right thing for the copies to do, no matter the deal offered to them.
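To show the wedge between the two evaluations, here is a sketch with illustrative payoffs in the spirit of Eliezer’s deal; the payoff table (each green-roomed copy gains $1 and each red-roomed copy loses $3 if the green roomers accept) and the 18/2 split are assumptions for the example, not quotes from either post.

```python
# A sketch of the per-copy (anthropically updated) evaluation versus the
# prior-policy (updateless) evaluation, under assumed payoffs: accepting
# the deal pays +$1 to every green-roomed copy and -$3 to every
# red-roomed copy. Same assumed 18/2 split and fair coin as above.

GREEN_ROOMS = {"heads": 18, "tails": 2}
RED_ROOMS = {"heads": 2, "tails": 18}
PAYOFF_GREEN, PAYOFF_RED = 1.0, -3.0

def total_payoff(outcome):
    # Total money over all 20 copies if the deal is accepted.
    return GREEN_ROOMS[outcome] * PAYOFF_GREEN + RED_ROOMS[outcome] * PAYOFF_RED

# A green roomer who has updated to 90% heads evaluates accepting:
posterior = {"heads": 0.9, "tails": 0.1}
ev_updated = sum(p * total_payoff(o) for o, p in posterior.items())

# The updateless evaluation scores the policy "accept" from the original
# 50/50 prior, before knowing which copy you become:
prior = {"heads": 0.5, "tails": 0.5}
ev_policy = sum(p * total_payoff(o) for o, p in prior.items())

print(ev_updated)  # about +5.6: the updated copy wants to accept
print(ev_policy)   # -20.0: the prior policy says decline
```

Rerunning the same comparison with other payoff tables is how the counter-examples above get built: for any single p you fix, some deal makes the p-weighted per-copy evaluation and the prior-policy evaluation come apart.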