Thinking about how all the green-room people come to the wrong conclusion makes my brain hurt. But I suppose, finally, it is true: they cannot base their decision on their subjective experience. Here I’ll outline some thoughts I’ve had on the conditions under which they should know they cannot do so.
Suppose there are 20 people (Amy, Benny, Carrie, Donny, …) and this experiment is done as described. If we always ask Tony (the 20th person) whether or not to say “yes”, and he bases his decision on whether or not he is in a green room, then the expected value of his decision, given that he does wake in a green room, really is $5.6. Tony here is a special, singled-out “decider”. One way of looking at this situation is that the ‘yes’ depends on some information in the system (that is, whether or not Tony was in a green room).
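To spell the number out (a minimal sketch only; I’m assuming the payoffs from the post, where a “yes” pays +$1 to each green-roomer and takes $3 from each red-roomer, i.e. +$12 total if heads and −$52 total if tails):

```python
import random

# Assumed payoffs from the post if the "yes" is taken:
# heads -> 18*(+$1) - 2*($3) = +$12 total; tails -> 2*(+$1) - 18*($3) = -$52 total.
PAYOFF = {"heads": 12, "tails": -52}

def trial():
    coin = random.choice(["heads", "tails"])
    n_green = 18 if coin == "heads" else 2
    rooms = ["green"] * n_green + ["red"] * (20 - n_green)
    random.shuffle(rooms)
    return coin, rooms[19]          # Tony is fixed in advance as person #20

trials = [trial() for _ in range(200_000)]
green = [coin for coin, tony_room in trials if tony_room == "green"]

# Tony's posterior is legitimate: P(heads | Tony is in a green room) is about 0.9 ...
print(sum(coin == "heads" for coin in green) / len(green))

# ... and the expected payoff of "yes", given that Tony woke up green, is about $5.6.
print(sum(PAYOFF[coin] for coin in green) / len(green))

# The policy "Tony says yes iff he is green" is also positive overall (about $2.8 per run).
print(sum(PAYOFF[coin] for coin, tony_room in trials if tony_room == "green") / len(trials))
```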
If instead we say that the decider can be anyone, and in fact we choose the decider after the assignment to rooms as someone in a green room, then we are not really given any information about the system.
It is the difference between (a) picking a person and seeing if they wake up in a green room, and (b) picking a person who is already in a green room. (I know you are well aware of this difference, but it helps to spell it out.)
You can’t pick the deciders from a set with a prespecified outcome. It’s a pointer problem: you can learn about the system from the change of state from Tony to Tony* (Tony: no room --> Tony*: green room), but you can’t assign the * after the assignment (i.e., pick someone already in a green room and ask them).
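A quick simulation of the (a)/(b) difference above (a sketch only; the 18/2 room split is from the post):

```python
import random

def assign_rooms():
    coin = random.choice(["heads", "tails"])
    n_green = 18 if coin == "heads" else 2
    rooms = ["green"] * n_green + ["red"] * (20 - n_green)
    random.shuffle(rooms)
    return coin, rooms

N = 200_000

# (a) Fixed pointer: pick person #1 in advance, then look at what room they got.
fixed = [(coin, rooms[0]) for coin, rooms in (assign_rooms() for _ in range(N))]
greens = [coin for coin, room in fixed if room == "green"]
print(sum(coin == "heads" for coin in greens) / len(greens))   # ~0.9: the room colour is informative

# (b) Unfixed pointer: pick the decider from among the green rooms after the assignment.
unfixed = []
for _ in range(N):
    coin, rooms = assign_rooms()
    decider = random.choice([i for i, room in enumerate(rooms) if room == "green"])
    assert rooms[decider] == "green"   # true in every possible world, by construction
    unfixed.append(coin)
print(sum(coin == "heads" for coin in unfixed) / len(unfixed))  # ~0.5: being green tells us nothing
```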
When a person wakes in a green room and is asked, they should say ‘yes’ if they were chosen to be asked randomly, independently of their room color. If they were chosen after the assignment, because they awoke in a green room, they should recognize this as the “unfixed pointer problem” (a special kind of selection bias).
Avoiding the pointer problem is straightforward. The people who wake in red rooms have a posterior probability of heads of 10%. The people who wake in green rooms have a posterior probability of heads of 90%. Your posterior probability is meaningful only if it could have come out either way. Since Eliezer only asks people who woke in green rooms, and never asks people who woke in red rooms, the posterior probabilities are not meaningful.
The people who wake in red rooms have a posterior probability of heads of 10%. The people who wake in green rooms have a posterior probability of heads of 90%. Your posterior probability is meaningful only if it could have come out either way. Since Eliezer only asks people who woke in green rooms, and never asks people who woke in red rooms, the posterior probabilities are not meaningful.
The rest of your reply makes sense to me, but can I ask you to amplify on this? Maybe I’m being naive, but to me, a 90% probability is a 90% probability and I use it in all my strategic choices. At least that’s what I started out thinking.
Now you’ve just shown that a decision process won’t want to strategically condition on this “90% probability”, because it always ends up as “90% probability” regardless of the true state of affairs, and so is not strategically informative to green agents—even if the probability seems well-calibrated in the sense that, looking over impossible possible worlds, green agents who say “90%” are correct 9 times out of 10. This seems like a conflict between an anthropic sense of probability (relative frequency in a population of observers) and a strategic sense of probability (summarizing information that is to be used to make decisions), or something along those lines. Is this what you’re pointing toward by saying that a posterior probability is meaningful at some times but not others?
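Here is the tension as I understand it, in simulation form (a sketch, assuming the post’s +$12/−$52 totals when the “yes” is taken):

```python
import random

PAYOFF = {"heads": 12, "tails": -52}   # assumed totals from the post when the "yes" is taken
N = 200_000

green_awakenings = []   # one entry per (world, green agent) pair
policy_total = 0.0      # value of the policy "every green-roomer answers yes"

for _ in range(N):
    coin = random.choice(["heads", "tails"])
    n_green = 18 if coin == "heads" else 2
    green_awakenings += [coin] * n_green
    policy_total += PAYOFF[coin]       # some green-roomer always exists, so "yes" is always taken

# Calibration: among green awakenings, heads really is true about 90% of the time ...
print(sum(coin == "heads" for coin in green_awakenings) / len(green_awakenings))

# ... yet the policy that conditions on that 90% loses about $20 per run.
print(policy_total / N)
```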
a decision process won’t want to strategically condition on this “90% probability”, because it always ends up as “90% probability” regardless of the true state of affairs, and so is not strategically informative to green agents
The 90% probability is generally strategically informative to green agents. They may legitimately point to themselves for information about the world, but in this specific case, there is confusion about who is doing the pointing.
When you think about a problem anthropically, you yourself are the pointer (the thing you are observing before and after to make an observation) and you assign yourself as the pointer. This is going to be strategically sound in all cases in which you don’t change as the pointer before and after an observation. (A pretty normal condition. Exceptions would be experiments in which you try to determine the probability that a certain activity is fatal to yourself—you will never be able to figure out the probability that you will die of your shrimp allergy by repeated trials of consuming shrimp, as your estimate will become increasingly skewed towards lower and lower values.)
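A toy version of that parenthetical (the shrimp numbers are made up, of course):

```python
import random

p_fatal = 0.05        # hypothetical true per-meal fatality probability
n_meals = 20
versions = 100_000    # many possible versions of "you"

survivors = sum(
    all(random.random() >= p_fatal for _ in range(n_meals))
    for _ in range(versions)
)

# Every version still around to do the estimating has observed 0 fatal meals,
# so its empirical estimate of p_fatal is exactly 0, and a uniform-prior Bayesian
# posterior mean of 1/(n_meals + 2) only shrinks further as n_meals grows.
print(survivors / versions)     # ~0.36 of versions survive to be misled
print(0 / n_meals)              # every survivor's empirical estimate of p_fatal
print(1 / (n_meals + 2))        # ~0.045, heading to 0 regardless of the true p_fatal
```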
Likewise, if I am in the experiment described in the post and I awaken in a green room, I should answer “yes” to your question if I determine that you asked me randomly; that is, that you would have asked me even if I had woken in a red room. In that case my anthropic observation that there is a 90% probability that heads was flipped is quite sound, as usual.
On the other hand, if you ask me only if I wake in a green room, then you wouldn’t have asked “me” if I awoke in a red room. (So I must realize this isn’t really about me assigning myself as a pointer, because “me” doesn’t change depending on what room I wake up in.) It’s strange and requires some mental gymnastics for me to understand that you, Eliezer, are picking the pointer in this case, even though you are asking me about my anthropic observation, for which I would usually expect to assign myself as the pointer.
So for me this is a pointer/biased-observation problem. But the anthropic problem is related, because we as humans cannot ask about the probability of currently observed events based on the frequency of observations which, had they been otherwise, would not have permitted us to ask the question.
On the other hand, if you ask me only if I wake in a green room, then you wouldn’t have asked “me” if I awoke in a red room. (So I must realize this isn’t really about me assigning myself as a pointer, because “me” doesn’t change depending on what room I wake up in.)
Huh. Very interesting again. So in other words, the probability that I would use for myself is not the probability that I should be using to answer questions from this decision process, because the decision process is using a different kind of pointer than my me-ness?
How would one formalize this? Bostrom’s division-of-responsibility principle?
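For what it’s worth, one way I could imagine cashing that out (a sketch only, assuming the post’s +$12/−$52 payoffs and reading “division of responsibility” as each green decider counting only its per-capita share of the group outcome):

```python
# Naive anthropic update: 90% heads, counting the whole group payoff.
naive = 0.9 * 12 + 0.1 * (-52)              # +5.6, the tempting answer

# Division of responsibility: a green decider is one of 18 deciders if heads,
# one of 2 if tails, and weighs only its own share of the group payoff.
shared = 0.9 * (12 / 18) + 0.1 * (-52 / 2)  # -2.0, which recovers the "no" answer

print(naive, shared)
```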
I haven’t had time to read this, but it looks possibly relevant (it talks about the importance of whether an observation point is fixed in advance or not) and also possibly interesting, as it compares Bayesian and frequentist views.
I will read it when I have time later… or anyone else is welcome to if they have time/interest.
What I got out of the article above, since I skipped all the technical math, was that frequentists treat “the pointer problem” (i.e., just your usual selection bias) as something that needs correction, while Bayesians don’t correct in these cases. The author concludes (I trust, via some kind of argument) that Bayesians don’t need to correct if they choose the posteriors carefully enough.
I now see that I was being entirely consistent with my role as the resident frequentist when I identified this as a “pointer problem” (which it is), but that doesn’t mean the problem can’t be pushed through without correction* (the Bayesian way) by carefully considering the priors.
*”Requiring correction” then might be a euphemism for “time-dependent”, while a preference for an updateless decision theory is a good Bayesian quality. It is a quality, by the way, that a frequentist can appreciate as well, so this might be a point of contact on which to win frequentists over.