Again, if we randomly selected someone to ask, rather than having specified in advance that we’re going to make the decision depend on the unanimous response of all people in green rooms, then there would be no paradox. What you’re talking about here, pulling out a random marble, is the equivalent of asking a random single person from either green or red rooms. But this is not what we’re doing!
Either I’m misunderstanding something, or I wasn’t clear.
To make it explicit: EVERYONE who gets a green marble gets asked, and the outcome depends on their consent being unanimous, just like everyone who wakes up in a green room gets asked. I.e., all twenty rationalists draw a marble from the bucket, so that by the end, the bucket is empty.
Everyone who got a green marble gets asked for their decision, and the final outcome depends on all the answers. The bit about them drawing marbles individually is just to keep them from seeing what marbles the others got or being able to talk to each other once the marble drawing starts.
Unless I completely failed to comprehend some aspect of what’s going on here, this is effectively equivalent to the problem you described.
Oh, okay, that wasn’t clear actually. (Because I’m used to “they” being a genderless singular pronoun.) In that case these problems do indeed look equivalent.
Hm. Hm hm hm. I shall have to think about this. It is an extremely good point. The more so as anyone who draws a green marble should indeed be assigning a 90% probability to there being a mostly-green bucket.
Sorry about the unclarity then. I probably should have explicitly stated a step-by-step “marble game procedure”.
My personal suggestion, if you want an “anthropic reasoning is confooozing” situation, would be the whole anthropic updating vs. Aumann agreement thing, since the disagreement would seem to be predictable in advance, and everyone involved could apparently be expected to agree that the disagreement is right and proper. (I.e., a mad scientist sets up a quantum suicide experiment. The test subject survives. The test subject seems to have Bayesian evidence in favor of MWI vs. a single world, while the external observer, the mad scientist who sees the test subject/victim survive, would seem to have no particular new evidence favoring MWI over a single world.)
(Yes, I know I’ve brought up that subject several times, but it does seem, to me, to be a rather more blatant “something funny is going on here”)
(EDIT: okay, I guess this would count as quantum murder rather than quantum suicide, but you know what I mean.)
I don’t see how being assigned a green or red room is “anthropic” while being assigned a green or red marble is not anthropic.
I thought the anthropic part came from updating on your own individual experience in the absence of observing what observations others are making.
The difference wasn’t marble vs room but “copies of one being, so number of beings changed” vs “just gather 20 rationalists...”
But my whole point was “the original wasn’t really an anthropic situation, let me construct this alternate yet equivalent version to make that clear”
Do you think that the Sleeping Beauty problem is an anthropic one?
It probably counts as an instance of the general class of problems one would think of as an “anthropic problem”.
I see. I had always thought of the problem as involving 20 (or sometimes 40) different people. The reason for this is that I am an intuitive rather than literal reader, and when Eliezer mentioned stuff about copies of me, I just interpreted this as emphasizing that each person has their own independent ‘subjective reality’, really only meaning that each person doesn’t share observations with the others.
So all along, I thought this problem was about challenging the soundness of updating on a single independent observation involving yourself as though you are some kind of special reference frame.
… therefore, I don’t think you took this element out, but I’m glad you are resolving the meaning of “anthropic” because there are probably quite a few different “subjective realities” circulating about what the essence of this problem is.
Sorry for delay.
Copies as in “upload your mind. then run 20 copies of the uploaded mind”.
And yes, I know there’s still tricky bits left in the problem, I merely established that those tricky bits didn’t derive from effects like mind copying or quantum suicide or anything like that and could instead show up in ordinary simple stuff, with no need to appeal to anthropic principles to produce the confusion. (sorry if that came out babbly, am getting tired)
That’s funny: when Eliezer said “imagine there are two of you”, etc., I had assumed he meant two of us rationalists, etc.
I don’t think so. I think the answer to both these problems is that if you update correctly, you get 0.5.
*blinks* mind expanding on that?
P(green|mostly green bucket) = 18⁄20
P(green|mostly red bucket) = 2⁄20
likelihood ratio = 9
if one started with no particular expectation of it being one bucket vs. the other, i.e., assigned 1:1 odds, then after updating upon seeing a green marble, one ought to assign 9:1 odds, i.e., probability 9⁄10, right?
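For concreteness, here’s that arithmetic spelled out as a quick sketch (illustrative only; the variable names are mine):

```python
# Prior 1:1 odds on mostly-green vs. mostly-red; a drawn green marble
# carries a likelihood ratio of (18/20) / (2/20) = 9.
from fractions import Fraction

p_green_given_mostly_green = Fraction(18, 20)
p_green_given_mostly_red = Fraction(2, 20)

prior_odds = Fraction(1, 1)  # 1:1, no prior preference between buckets
likelihood_ratio = p_green_given_mostly_green / p_green_given_mostly_red  # 9
posterior_odds = prior_odds * likelihood_ratio  # 9:1 in favor of mostly-green

posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_prob)  # 9/10
```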
I guess that does need a lot of explaining.
I would say:
P(green|mostly green bucket) = 1
P(green|mostly red bucket) = 1
P(green) = 1
because P(green) is not the probability that you will get a green marble, it’s the probability that someone will get a green marble. From the perspective of the priors, all the marbles are drawn, and no one draw is different from any other. If you don’t draw a green marble, you’re discarded and the people who did get a green vote. For the purposes of figuring out the priors for a group strategy, your draw being green is not an event.
Of course, you know that you’ve drawn green. But the only thing you can translate it into that has a prior is “someone got green.”
That probably sounds contrived. Maybe it is. But consider a slightly different example:
Two marbles and two people instead of twenty.
One marble is green, the other will be red or green based on a coin flip (green on heads, red on tails).
I like this example because it combines the two conflicting intuitions in the same problem. Only a fool would draw a red marble and remain uncertain about the coin flip. But someone who draws a green marble is in a situation similar to the twenty marble scenario.
If you were to plan ahead of time how the greens should vote, you would tell them to assume 50%. But a person holding a green marble might think it’s 2⁄3 in favor of double green.
To avoid embarrassing paradoxes, you can base everything on the four events “heads,” “tails,” “someone gets green,” and “someone gets red.” Update as normal.
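For concreteness, here is the arithmetic for both perspectives in the two-marble case (a minimal sketch; the variable names are mine). Both numbers come from the same Bayes’ Law; they differ only in which event is conditioned on.

```python
# Two-marble example: one marble is always green; the second is green on
# heads, red on tails, each with probability 1/2.
from fractions import Fraction

half = Fraction(1, 2)

# Perspective 1: condition on "I, personally, drew green".
# P(I draw green | heads) = 1; P(I draw green | tails) = 1/2.
p_heads_given_i_drew_green = (1 * half) / (1 * half + half * half)
print(p_heads_given_i_drew_green)  # 2/3

# Perspective 2: condition on "someone got green", which happens either
# way, so both likelihoods are 1 and the update is vacuous.
p_heads_given_someone_green = (1 * half) / (1 * half + 1 * half)
print(p_heads_given_someone_green)  # 1/2
```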
Yes, the probability that someone will get a green marble is rather different from the probability that I, personally, will get a green marble. But if I do personally get a green marble, that’s evidence in favor of the mostly-green bucket.
The decision algorithm for how to respond to that, though, is in this case skewed by the rules for the payout.
And in your example, if I drew green, I’d consider the 2⁄3 probability the correct one for whoever drew green.
Now, if there’s a payout scheme involved with funny business, that may alter some decisions, but not magically change my epistemology.
What kind of funny business?
Let’s just say that you don’t draw blue.
OK, but I think Psy-Kosh was talking about something to do with the payoffs. I’m just not sure if he means the voting or the dollar amounts or what.
Sorry for delay. And yeah, I meant stuff like “only greens get to decide, and the decision needs to be unanimous” and so on.
I agree that changes the answer. I was assuming a scheme like that in my two marble example. In a more typical situation, I would also say 2⁄3.
To me, it’s not a drastic (or magical) change, just getting a different answer to a different question.
Um… okay… I’m not sure what we’re disagreeing about here, if anything:
my position is “given that I found myself with a green marble, it is right and proper for me to assign a 2⁄3 probability to both being green. However, the correct choice to make, given the peculiarities of this specific problem, may require one to make a decision that seems, on the surface, as if one didn’t update like that at all.”
Well, we might be saying the same thing but coming from different points of view about what it means. I’m not actually a bayesian, so when I talk about assigning probabilities and updating them, I just mean doing equations.
What I’m saying here is that you should set up the equations in a way that reflects the group’s point of view because you’re telling the group what to do. That involves plugging some probabilities of one into Bayes’ Law and getting a final answer equal to one of the starting numbers.
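For instance (a minimal worked case of what I mean, using the two-marble version and the same notation as above):

P(heads | someone gets green) = P(someone gets green | heads) * P(heads) / P(someone gets green) = (1 * 1/2) / 1 = 1/2

With likelihoods of one on both sides, the posterior just equals the prior you started with.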
So was I. But fortunately I was restrained enough to temper my uncouth humour with obscurity.