Actually… how is this an anthropic situation AT ALL?
I mean, wouldn’t it be equivalent to, say, gathering 20 rational people (who understand PD, etc., and can certainly manage to agree to coordinate with each other) who are allowed to meet with each other in advance and discuss the situation...
I show up and tell them that I have two buckets of marbles, some of which are green and some of which are red.
One bucket has 18 green and 2 red, and the other bucket has 18 red and 2 green.
I have flipped (or will flip) a logical coin. Depending on the outcome, I will use one bucket or the other.
After having an opportunity to discuss strategy, they will each be allowed to reach into the bucket without looking, pull out a marble, look at it, and then, if it’s green, choose whether to pay and steal, etc. (in case it’s not obvious, the payout rules are equivalent to the OP’s).
As near as I can determine, this situation is entirely equivalent to the OP and is in no way an anthropic one. If the OP actually is an argument against anthropic updates in the presence of logical uncertainty… then it’s actually an argument against the general case of Bayesian updating in the presence of logical uncertainty, even when there’s no anthropic stuff going on at all!
EDIT: oh, in case it’s not obvious, marbles are not replaced after being drawn from the bucket.
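(If anyone wants to sanity-check the numbers, here’s a quick simulation sketch of the marble game. I’m assuming the OP’s payoffs — +$1 per green-marble holder and −$3 per red-marble holder, applied only if the green-marble holders unanimously say yes — so treat those constants as my reading of the rules rather than gospel.)

    import random

    def simulate(trials=100_000, green_payoff=1, red_payoff=-3):
        # Marble game: a coin flip picks the bucket; all 20 people draw without replacement.
        # The payoff constants are an assumed reading of the OP's rules.
        green_and_mostly_green = 0
        green_draws = 0
        total_payoff_if_all_say_yes = 0
        for _ in range(trials):
            mostly_green = random.random() < 0.5  # stand-in for the logical coin
            bucket = ['G'] * 18 + ['R'] * 2 if mostly_green else ['G'] * 2 + ['R'] * 18
            random.shuffle(bucket)                # deal out all 20 marbles
            # Track one fixed person's perspective (the one holding bucket[0]):
            if bucket[0] == 'G':
                green_draws += 1
                if mostly_green:
                    green_and_mostly_green += 1
            # If every green-marble holder says yes, the payout rule fires:
            n_green = bucket.count('G')
            total_payoff_if_all_say_yes += n_green * green_payoff + (20 - n_green) * red_payoff
        print("P(mostly green | I drew green) ~", green_and_mostly_green / green_draws)
        print("mean group payoff of 'always say yes':", total_payoff_if_all_say_yes / trials)

    simulate()
    # Typical output: the posterior comes out near 0.9, but 'always say yes' averages about
    # -20 per game versus 0 for 'always say no' -- which is exactly the tension in the OP.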
Right, and this is a perspective very close to intuition for UDT: you consider different instances of yourself at different times as separate decision-makers that all share the common agenda (“global strategy”), coordinated “off-stage”, and implement it without change depending on circumstances they encounter in each particular situation. The “off-stageness” of coordination is more naturally described by TDT, which allows considering different agents as UDT-instances of the same strategy, but the precise way in which it happens remains magic.
Nesov, the reason why I regard Dai’s formulation of UDT as such a significant improvement over your own is that it does not require offstage coordination. Offstage coordination requires a base theory and a privileged vantage point and, as you say, magic.
I still don’t understand this emphasis. Here I sketched in what sense I mean the global solution—it’s more about definition of preference than the actual computations and actions that the agents make (locally). There is an abstract concept of global strategy that can be characterized as being “offstage”, but there is no offstage computation or offstage coordination, and in general complete computation of global strategy isn’t performed even locally—only approximations, often approximations that make it impossible to implement the globally best solution.
In the above comment, by “magic” I referred to the exact mechanism that says in what way and to what extent different agents are running the same algorithm, which is more in the domain of TDT; UDT generally doesn’t talk about separate agents, only about different possible states of the same agent. Which is why neither concept solves the bargaining problem: it’s out of UDT’s domain, and TDT takes the relevant pieces of the puzzle as given, in its causal graphs.
For further disambiguation, see for example this comment you made:
We’re taking apart your “mathematical intuition” into something that invents a causal graph (this part is still magic) and a part that updates a causal graph “given that your output is Y” (Pearl says how to do this).
That the uncertainty is logical seems to be irrelevant here.
Agreed. But I seem to recall seeing some comments about distinguishing between quantum and logical uncertainty, etc., so I figured I may as well say that it at least is equivalent, given that it’s the same type of uncertainty as in the original problem, and so on...
Again, if we randomly selected someone to ask, rather than having specified in advance that we’re going to make the decision depend on the unanimous response of all people in green rooms, then there would be no paradox. What you’re talking about here, pulling out a random marble, is the equivalent of asking a random single person from either green or red rooms. But this is not what we’re doing!
Either I’m misunderstanding something, or I wasn’t clear.
To make it explicit: EVERYONE who gets a green marble gets asked, and the outcome depends on their consent being unanimous, just like everyone who wakes up in a green room gets asked. I.e., all twenty rationalists draw a marble from the bucket, so that by the end the bucket is empty.
Everyone who got a green marble gets asked for their decision, and the final outcome depends on all the answers. The bit about them drawing marbles individually is just to keep them from seeing what marbles the others got or being able to talk to each other once the marble drawing starts.
Unless I completely failed to comprehend some aspect of what’s going on here, this is effectively equivalent to the problem you described.
Oh, okay, that wasn’t clear actually. (Because I’m used to “they” being a genderless singular pronoun.) In that case these problems do indeed look equivalent.
Hm. Hm hm hm. I shall have to think about this. It is an extremely good point. The more so as anyone who draws a green marble should indeed be assigning a 90% probability to there being a mostly-green bucket.
Sorry about the unclarity then. I probably should have explicitly stated a step by step “marble game procedure”.
My personal suggestion, if you want an “anthropic reasoning is confooozing” situation, would be the whole anthropic updating vs. Aumann agreement thing, since the disagreement would seem to be predictable in advance, and everyone involved could apparently be expected to agree that the disagreement is right and proper. (I.e., a mad scientist sets up a quantum suicide experiment. The test subject survives. The test subject seems to have Bayesian evidence in favor of MWI over a single world, while the external observer, the mad scientist who sees the test subject/victim survive, would seem to have no particular new evidence favoring MWI over a single world.)
(Yes, I know I’ve brought up that subject several times, but it does seem, to me, to be a rather more blatant “something funny is going on here”)
(EDIT: okay, I guess this would count as quantum murder rather than quantum suicide, but you know what I mean.)
I don’t see how being assigned a green or red room is “anthropic” while being assigned a green or red marble is not anthropic.
I thought the anthropic part came from updating on your own individual experience in the absence of observing what observations others are making.
The difference wasn’t marble vs room but “copies of one being, so number of beings changed” vs “just gather 20 rationalists...”
But my whole point was “the original wasn’t really an anthropic situation, let me construct this alternate yet equivalent version to make that clear”
Do you think that the Sleeping Beauty problem is an anthropic one?
It probably counts as an instance of the general class of problems one would think of as an “anthropic problem”.
I see. I had always thought of the problem as involving 20 (or sometimes 40) different people. The reason for this is that I am an intuitive rather than literal reader, and when Eliezer mentioned stuff about copies of me, I just interpreted this as meaning to emphasize that each person has their own independent ‘subjective reality’. Really only meaning that each person doesn’t share observations with the others.
So all along, I thought this problem was about challenging the soundness of updating on a single independent observation involving yourself as though you are some kind of special reference frame.
… therefore, I don’t think you took this element out, but I’m glad you are resolving the meaning of “anthropic” because there are probably quite a few different “subjective realities” circulating about what the essence of this problem is.
Sorry for delay.
Copies as in “upload your mind. then run 20 copies of the uploaded mind”.
And yes, I know there’s still tricky bits left in the problem, I merely established that those tricky bits didn’t derive from effects like mind copying or quantum suicide or anything like that and could instead show up in ordinary simple stuff, with no need to appeal to anthropic principles to produce the confusion. (sorry if that came out babbly, am getting tired)
That’s funny: when Eliezer said “imagine there are two of you”, etc., I had assumed he meant two of us rationalists, etc.
I don’t think so. I think the answer to both these problems is that if you update correctly, you get 0.5.
*blinks* Mind expanding on that?
P(green|mostly green bucket) = 18⁄20
P(green|mostly red bucket) = 2⁄20
likelihood ratio = 9
If one started with no particular expectation of it being one bucket vs. the other, i.e., assigned 1:1 odds, then after updating upon seeing a green marble, one ought to assign 9:1 odds, i.e., probability 9⁄10, right?
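(And a quick check of that arithmetic, under my reading of the setup:)

    prior_odds = 1.0                         # 1:1 between mostly-green and mostly-red
    likelihood_ratio = (18 / 20) / (2 / 20)  # = 9
    posterior_odds = prior_odds * likelihood_ratio
    print(posterior_odds / (posterior_odds + 1))  # -> 0.9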
I guess that does need a lot of explaining.
I would say:
P(green|mostly green bucket) = 1
P(green|mostly red bucket) = 1
P(green) = 1
because P(green) is not the probability that you will get a green marble, it’s the probability that someone will get a green marble. From the perspective of the priors, all the marbles are drawn, and no one draw is different from any other. If you don’t draw a green marble, you’re discarded and the people who did draw green do the voting. For the purposes of figuring out the priors for a group strategy, your draw being green is not an event.
Of course, you know that you’ve drawn green. But the only thing you can translate it into that has a prior is “someone got green.”
That probably sounds contrived. Maybe it is. But consider a slightly different example:
Two marbles and two people instead of twenty.
One marble is green, the other will be red or green based on a coin flip (green on heads, red on tails).
I like this example because it combines the two conflicting intuitions in the same problem. Only a fool would draw a red marble and remain uncertain about the coin flip. But someone who draws a green marble is in a situation similar to the twenty marble scenario.
If you were to plan ahead of time how the greens should vote, you would tell them to assume 50%. But a person holding a green marble might think it’s 2⁄3 in favor of double green.
To avoid embarrassing paradoxes, you can base everything on the four events “heads,” “tails,” “someone gets green,” and “someone gets red.” Update as normal.
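(Here’s a small enumeration sketch of that two-marble game, just to put both numbers side by side; the event bookkeeping is mine, but the setup is as above.)

    from fractions import Fraction

    # Heads: both marbles green. Tails: one green, one red.
    # Four equally likely (coin, my marble) cases from one person's point of view.
    quarter = Fraction(1, 4)
    worlds = [("heads", "G"), ("heads", "G"), ("tails", "G"), ("tails", "R")]

    # Personal update: condition on "I drew green".
    p_i_green = sum(quarter for _, m in worlds if m == "G")                              # 3/4
    p_heads_and_i_green = sum(quarter for c, m in worlds if c == "heads" and m == "G")   # 1/2
    print("P(heads | I drew green) =", p_heads_and_i_green / p_i_green)                  # 2/3

    # Group-planning update: condition on "someone drew green" (true on heads and on tails).
    p_someone_green = Fraction(1)
    p_heads_and_someone_green = Fraction(1, 2)
    print("P(heads | someone drew green) =", p_heads_and_someone_green / p_someone_green)  # 1/2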
Yes, the probability that someone will get a green marble is rather different from the probability that I, personally, will get a green marble. But if I do personally get a green marble, that’s evidence in favor of the mostly-green bucket.
The decision algorithm for how to respond to that, though, is in this case skewed by the rules for the payout.
And in your example, if I drew green, I’d consider the 2⁄3 probability the correct one for whoever drew green.
Now, if there’s a payout scheme involved with funny business, that may alter some decisions, but not magically change my epistemology.
What kind of funny business?
Let’s just say that you don’t draw blue.
OK, but I think Psy-Kosh was talking about something to do with the payoffs. I’m just not sure if he means the voting or the dollar amounts or what.
Sorry for delay. And yeah, I meant stuff like “only greens get to decide, and the decision needs to be unanimous” and so on
I agree that changes the answer. I was assuming a scheme like that in my two marble example. In a more typical situation, I would also say 2⁄3.
To me, it’s not a drastic (or magical) change, just getting a different answer to a different question.
Um… okay… I’m not sure what we’re disagreeing about here, if anything:
my position is “given that I found myself with a green marble, it is right and proper for me to assign a 2⁄3 probability to both being green. However, the correct choice to make, given the peculiarities of this specific problem, may require one to make a decision that seems, on the surface, as if one didn’t update like that at all.”
Well, we might be saying the same thing but coming from different points of view about what it means. I’m not actually a Bayesian, so when I talk about assigning probabilities and updating them, I just mean doing equations.
What I’m saying here is that you should set up the equations in a way that reflects the group’s point of view because you’re telling the group what to do. That involves plugging some probabilities of one into Bayes’ Law and getting a final answer equal to one of the starting numbers.
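(Concretely, that looks like: P(heads | someone gets green) = P(someone gets green | heads) * P(heads) / P(someone gets green) = (1 * 0.5) / 1 = 0.5 — the probabilities of one drop out and the answer equals the prior you started with.)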
So was I. But fortunately I was restrained enough to temper my uncouth humour with obscurity.
Very enlightening!
It just shows that the OP was an overcomplicated example generating confusion about the update.
[EDIT] Deleted rest of the comment due to revised opinion here: http://lesswrong.com/lw/17c/outlawing_anthropics_an_updateless_dilemma/13hk
Good point. After thinking about this for a while, I feel comfortable simultaneously holding these views:
1) You shouldn’t do anthropic updates. (i.e. update on the fact that you exist)
2) The example posed in the top-level post is not an example of anthropic reasoning, but reasoning on specific givens and observations, as are most supposed examples of anthropic reasoning.
3) Any evidence arising from the fact that you exist is implicitly contained by your observations by virtue of their existence.
Wikipedia gives one example of a productive use of the anthropic principle, but it appears to be reasoning based on observations of the type of life-form we are, as well as other hard-won biochemical knowledge, well above and beyond the observation that we exist.
Thanks.
I don’t THINK I agree with your point 1. I.e., I favor saying yes to anthropic updates, but I admit that there are definitely confusing issues here.
Mind expanding on point 3? I think I get what you’re saying, but in general we filter out that part of our observations, that is, the fact that observations are occurring at all. Getting that back is the point of anthropic updating. Actually… IIRC, Nick Bostrom’s way of talking about anthropic updates is more or less exactly your point 3 in reverse… i.e., as near as I can determine and recall, his position explicitly advocates treating the significance of the fact that observations are occurring at all as part of the usual update based on observation. Maybe I’m misremembering, though.
Also, separating it out into a single anthropic update and then treating all observations as conditional on your existence or such helps avoid double counting that aspect, right?
Also, here’s another physics example, a bit more recent that was discussed on OB a while back.
Reading the link, the second paper’s abstract, and most of Scott Aaronson’s post, it looks to me like they’re not using anthropic reasoning at all. Robin Hanson summarizes their “entropic principle” (and the abstract and all discussion agree with his summary) as:
since observers need entropy gains to function physically, we can estimate the probability that any small spacetime volume contains an observer to be proportional to the entropy gain in that volume.
The problem is that “observer” is not the same as “anthrop-” (human). This principle is just a subtle restatement of either a tautology or known physical law. Because it’s not that “observers need entropy gains”. Rather, observation is entropy gain. To observe something is to increase one’s mutual information with it. But since phase space is conserved, all gains in mutual information must be offset by an increase in entropy.
But since “observers” are simply anything that forms mutual information with something else, it doesn’t mean a conscious observer, let alone a human one. For that, you’d need to go beyond P(entropy gain|observer) to P(consciousness|entropy gain).
(I’m a bit distressed no one else made this point.)
Now, this idea could lead to an insight if you endorsed some neo-animistic view that consciousness is proportional to normalized rate of mutual information increase, and so humans are (as) conscious (as we are) because we’re above some threshold … but again, you’d be using nothing from your existence as such.
The argument was “higher rate of entropy production is correlated with more observers, probably. So we should expect to find ourselves in chunks of reality that have high rates of entropy production.”
I guess it wasn’t just observers, but (non-reversible) computations.
I.e., anthropic reasoning was the justification for using the entropy-production criterion in the first place. Yes, there is a question of what fraction of observers are conscious, etc… but a universe that can’t support much in the way of observers at all probably can’t support much in the way of conscious observers, while a universe that can support lots of observers can probably support more conscious observers than the other, right?
Or did I misunderstand your point?
Now I’m not understanding how your response applies.
My point was: the entropic principle estimates the probability of observers per unit volume by using the entropy per unit volume. But this follows immediately from the second law and conservation of phase space; it’s necessarily true.
To the extent that it assigns a probability to a class that includes us, it does a poor job, because we make up a tiny fraction of the “observers” (appropriately defined) in the universe.
The situation is not identical in the non-anthropic case in that there are equal numbers of rooms but differing numbers of marbles.
There’s only one green room (so observing it is evidence for heads-green with p=0.5) whereas there are 18 green marbles, so p(heads|green)= ((18/20)/0.5)*0.5 = 0.9.
Sorry for delayed response.
Anyways, how so? 20 rooms in the original problem, 20 marbles in mine.
What fraction are green vs. red derives from examining a logical coin, etc… I’m not sure where you’re getting the “only one green room” thing.
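(For what it’s worth, a quick check under my reading of the original setup — 18 green rooms and 2 red on heads, 2 green and 18 red on tails — gives the same posterior as the marble version:)

    from fractions import Fraction

    half = Fraction(1, 2)
    p_green_room_given_heads = Fraction(18, 20)  # assumed room split: 18 green / 2 red on heads
    p_green_room_given_tails = Fraction(2, 20)   # and the reverse on tails
    p_heads_given_green_room = (p_green_room_given_heads * half) / (
        p_green_room_given_heads * half + p_green_room_given_tails * half)
    print(p_heads_given_green_room)  # -> 9/10, matching the green-marble posterior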