Sorry about the lack of clarity, then. I probably should have explicitly stated a step-by-step “marble game” procedure.
My personal suggestion, if you want an “anthropic reasoning is confooozing” situation, would be the whole anthropic updating vs. Aumann agreement thing, since the disagreement would seem to be predictable in advance, and everyone involved could apparently be expected to agree that the disagreement is right and proper. (i.e., a mad scientist sets up a quantum suicide experiment. The test subject survives. The test subject seems to have Bayesian evidence favoring MWI over a single world, while the external observer, the mad scientist who sees the test subject/victim survive, would seem to have no particular new evidence favoring MWI over a single world.)
(Yes, I know I’ve brought up that subject several times, but it does seem, to me, a rather more blatant case of “something funny is going on here”.)
(EDIT: okay, I guess this would count as quantum murder rather than quantum suicide, but you know what I mean.)
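One way to formalize the apparent asymmetry above (my framing, for illustration; the survival probability and priors are assumptions, not from the original comment):

```python
# Sketch of the apparent update asymmetry in the quantum-suicide setup.
# Assumptions (mine): each round kills the subject with probability 1/2
# in a single world, and priors over the two hypotheses are 50/50.

def posterior_mwi(prior_mwi, like_mwi, like_single):
    """Bayes update: P(MWI | evidence) given the two likelihoods."""
    num = prior_mwi * like_mwi
    return num / (num + (1 - prior_mwi) * like_single)

p_survive = 0.5 ** 10  # probability of surviving ten 50/50 rounds

# External observer: sees the subject survive with probability p_survive
# under either hypothesis, so the likelihood ratio is 1 -> no update.
outside = posterior_mwi(0.5, p_survive, p_survive)

# Test subject, updating on "I am having an experience at all":
# guaranteed under MWI (some branch survives), but only p_survive
# under a single world.
subject = posterior_mwi(0.5, 1.0, p_survive)

print(outside)  # 0.5 -- no evidence either way
print(subject)  # ~0.999 -- strong apparent evidence for MWI
```

Whether the subject’s conditioning step is legitimate is exactly the contested part; the sketch only shows why the two parties’ updates come apart.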
I don’t see how being assigned a green or red room is “anthropic” while being assigned a green or red marble is not anthropic.
I thought the anthropic part came from updating on your own individual experience without observing what others observe.
The difference wasn’t marble vs. room but “copies of one being, so the number of beings changes” vs. “just gather 20 rationalists...”
But my whole point was: “the original wasn’t really an anthropic situation; let me construct this alternate yet equivalent version to make that clear.”
Do you think that the Sleeping Beauty problem is an anthropic one?
It probably counts as an instance of the general class of problems one would think of as an “anthropic problem”.
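For reference, the structure that makes Sleeping Beauty feel like a member of that class can be shown with a minimal simulation (my framing, for illustration; the 100,000-trial count is an arbitrary choice):

```python
import random

# Minimal Sleeping Beauty sketch: heads -> one awakening,
# tails -> two awakenings. Counting per awakening gives the
# "thirder" answer; counting per experiment gives the "halfer" answer.
random.seed(0)

heads_awakenings = 0
total_awakenings = 0
heads_experiments = 0
trials = 100_000

for _ in range(trials):
    heads = random.random() < 0.5
    if heads:
        heads_experiments += 1
        heads_awakenings += 1
        total_awakenings += 1
    else:
        total_awakenings += 2  # woken twice, no memory between

print(heads_awakenings / total_awakenings)  # ~1/3, per awakening
print(heads_experiments / trials)           # ~1/2, per experiment
```

The disagreement is over which counting rule a woken Beauty should use, which is the same “what do I condition on?” question as in the copies version.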
I see. I had always thought of the problem as involving 20 (or sometimes 40) different people. The reason is that I am an intuitive rather than literal reader, and when Eliezer mentioned copies of me, I interpreted that as emphasizing that each person has their own independent “subjective reality”, really only meaning that each person doesn’t share observations with the others.
So all along, I thought this problem was about challenging the soundness of updating on a single independent observation involving yourself, as though you were some kind of special reference frame.
… therefore, I don’t think you took this element out. But I’m glad you are resolving the meaning of “anthropic”, because there are probably quite a few different “subjective realities” circulating about what the essence of this problem is.
Sorry for delay.
Copies as in “upload your mind, then run 20 copies of the uploaded mind”.
And yes, I know there are still tricky bits left in the problem. I merely established that those tricky bits don’t derive from effects like mind copying or quantum suicide, and can instead show up in ordinary simple situations, with no need to appeal to anthropic principles to produce the confusion. (Sorry if that came out babbly; I’m getting tired.)
That’s funny: when Eliezer said “imagine there are two of you”, etc., I had assumed he meant two of us rationalists, etc.