Philosophically, I would suggest that anthropic reasoning results from the combination of a subjective view from the perspective of a mind, and an objective physical view-from-nowhere.
Note that if you use the “objective physical view-from-nowhere” on its own, you approximately get SIA. That’s because my policy only matters in worlds where Christopher King (CK) exists. Let X be the value “utility increase from CK following policy Q”; X is zero in any world where CK does not exist, so

E[X] = E[X | CK exists] * P(CK exists).

Since P(CK exists) does not depend on Q, choosing Q to maximize E[X] is the same as maximizing

E[X | CK exists] = E[X | CK exists and A] * P(A | CK exists) + E[X | CK exists and not A] * P(not A | CK exists)

for any event A.
(Note that CK’s level of power is also a random variable that affects X. After all, an anthropically undead Christopher King is as good as gone. The point is that if I am calculating the utility of my policy conditional on some event (like my existence), I need to update from the physical prior.)
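As a concrete illustration of the update described above, here is a minimal toy sketch; the hypothesis names, prior, existence probabilities, and utilities are made up for illustration and are not from the comment. Conditioning the physical prior on “CK exists” up-weights the hypothesis in which CK is more likely to exist (the SIA-like behavior), and E[X | CK exists] then decomposes over a binary event A exactly as in the formula above.

```python
# Toy sketch with assumed numbers: condition a physical prior on "CK exists",
# then compute E[X | CK exists] by decomposing over A = "big_world".

physical_prior = {"small_world": 0.5, "big_world": 0.5}    # assumed prior over worlds
p_ck_exists    = {"small_world": 0.01, "big_world": 0.99}  # assumed P(CK exists | world)

# Bayes: P(world | CK exists) is proportional to P(CK exists | world) * P(world).
joint = {w: physical_prior[w] * p_ck_exists[w] for w in physical_prior}
total = sum(joint.values())
posterior = {w: joint[w] / total for w in joint}
print(posterior)  # ~{'small_world': 0.01, 'big_world': 0.99} -- the world where CK is likelier to exist dominates

# Hypothetical conditional utilities E[X | CK exists and world] for policy Q.
e_x_given_world = {"small_world": 3.0, "big_world": 1.0}

# E[X | CK exists] = sum over values of A of E[X | CK exists and A] * P(A | CK exists).
e_x_given_ck = sum(posterior[w] * e_x_given_world[w] for w in posterior)
print(e_x_given_ck)  # ~1.02
```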
That being said, Solomonoff induction is first person, so starting with a physical prior isn’t necessarily the best approach.
Reminds me of fully non-indexical conditioning (FNC): the probability that someone with your exact observations exists is, in general, higher in a universe with a larger population. SSA gets around this with “reference classes”, although how to construct one’s reference class is underdetermined.
EDIT: But also, see Stuart Armstrong’s critique arguing that FNC is reflectively inconsistent.
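To make the FNC-style effect concrete, here is a minimal numeric sketch with assumed values (the per-observer match probability, populations, and prior below are invented for illustration): updating on “someone with my exact observations exists” pushes almost all of the posterior mass onto the larger universe.

```python
# Toy FNC-style sketch with assumed numbers: updating on "someone with my exact
# observations exists" favors the hypothesis with the larger population.

P_MATCH = 1e-6  # assumed chance that any single observer has exactly your observations

def p_someone_matches(n_observers: int, p: float = P_MATCH) -> float:
    """P(at least one of n independent observers has exactly your observations)."""
    return 1.0 - (1.0 - p) ** n_observers

prior      = {"small_universe": 0.5, "big_universe": 0.5}      # assumed prior
population = {"small_universe": 10**3, "big_universe": 10**7}  # assumed populations

likelihood = {u: p_someone_matches(population[u]) for u in prior}
joint      = {u: prior[u] * likelihood[u] for u in prior}
total      = sum(joint.values())
posterior  = {u: joint[u] / total for u in joint}

print(posterior)  # big_universe ends up with ~99.9% of the posterior mass
```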
Oh, well that’s pretty broken then! I guess you can’t use the “objective physical view-from-nowhere” on its own, noted.
I would also point out that FNC is not strictly a view-from-nowhere theory. The probability updates it proposes are still based on an implicit assumption of self-sampling.