On the other hand, if you only ask me when I wake in a green room, then you wouldn’t have asked “me” had I woken in a red room. (So I must realize this isn’t really about me assigning myself as a pointer, because “me” doesn’t change depending on which room I wake up in.)
Huh. Very interesting again. So, in other words, the probability that I would use for myself is not the probability that I should be using to answer questions from this decision process, because the decision process is using a different kind of pointer than my me-ness?
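To make the two pointers concrete, here is a quick simulation sketch. I’m assuming the usual numbers for the green/red room setup (18 green and 2 red rooms on heads, 2 green and 18 red on tails); those numbers aren’t stated here, and only the contrast between the two frequencies matters, not the particular split.

```python
import numpy as np

# Sketch of the two pointers. The 18/2 split on heads (and 2/18 on tails)
# is an assumed setup for illustration; only the contrast matters.
rng = np.random.default_rng(0)
trials = 100_000
heads = rng.random(trials) < 0.5           # one fair coin flip per "world"
greens = np.where(heads, 18, 2)            # green-room awakenings in that world

# Pointer = "me": a randomly sampled awakening that turned out to be green.
# Each world counts in proportion to how many green awakenings it contains.
p_heads_for_me = (heads * greens).sum() / greens.sum()

# Pointer = the decision process: "whoever is in a green room, answer this."
# Every world has at least one green room, so no world gets extra weight.
p_heads_for_process = heads.mean()

print("P(heads) conditioning on my own green awakening:", round(float(p_heads_for_me), 3))      # ~0.9
print("P(heads) from the decision process's viewpoint: ", round(float(p_heads_for_process), 3))  # ~0.5
```

The first number is the one I would quote “for myself”; the second is the one the decision process cares about, because its pointer picks worlds in which the question gets asked at all, not awakenings.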
How would one formalize this? Bostrom’s division-of-responsibility principle?
I haven’t had time to read this, but it looks possibly relevant (it talks about the importance of whether an observation point is fixed in advance or not) and also possibly interesting, as it compares Bayesian and frequentist views.
I will read it when I have time later… or anyone else is welcome to if they have time/interest.
What I got out of the article above, since I skipped all the technical math, was that frequentists consider “the pointer problem” (i.e., just your usual selection bias) as something that needs correction, while Bayesians don’t correct in these cases. The author concludes (I trust, via some kind of argument) that Bayesians don’t need to correct if they choose the posteriors carefully enough.
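To make that concrete, here is a toy example of my own (it is not from the article): the data only reach us when a draw comes out positive, so the naive average is exactly the kind of thing a frequentist would say needs correction, while the Bayesian route is just to fold the selection step into the likelihood and condition on the data.

```python
import numpy as np
from scipy.stats import norm

# Toy illustration (my own, not from the article): we want mu, but a draw
# only reaches us when it comes out positive (the selection / "pointer" step).
rng = np.random.default_rng(0)
mu_true = 0.5
draws = rng.normal(mu_true, 1.0, size=20_000)
observed = draws[draws > 0.0]

print("naive mean of what we saw:", round(float(observed.mean()), 3))   # biased high, ~1.0

# The Bayesian route: no after-the-fact correction, just build the selection
# step into the likelihood (a truncated normal) and condition on the data.
mus = np.linspace(-2.0, 2.0, 401)                    # flat prior over this grid
log_post = np.array([
    (norm.logpdf(observed, m, 1.0) - norm.logsf(0.0, m, 1.0)).sum()
    for m in mus
])
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior mean for mu:", round(float((mus * post).sum()), 3))    # ~0.5
```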
I now see that I was being entirely consistent with my role as the resident frequentist when I identified this as a “pointer problem” (which it is), but that doesn’t mean the problem can’t be pushed through without correction*, the Bayesian way, by carefully considering the priors.
*“Requiring correction” then might be a euphemism for “time-dependent,” while a preference for an updateless decision theory is a good Bayesian quality. It is a quality, by the way, that a frequentist can appreciate as well, so this might be a point of contact on which to win frequentists over.