Believing in QI (quantum immortality) is the same as making a Bayesian update on the event “I will become immortal”.
Imagine you are a prediction market trader, and a genie appears. You ask the genie “will I become immortal” and the genie answers “yes” and then disappears.
Would you buy shares on a Taiwan war happening?
If the answer is yes, the same thing should apply if a genie told you QI is true (unless the prediction market already priced QI in). No weird anthropics math necessary!
It is a good summary, but the question is whether we can generalize this idea to the claim that “I am more likely to be born in a world where life-extension technologies are developing and alignment is easy”. A simple Bayesian update does not support this.
However, if the future measure can somehow “propagate” back in time, it increases my chances of being born in a world where there is a logical chance of survival: where alignment is easy.
A simple example of a world-model in which measure “propagates” back in time is the simulation argument: if I survive into a world with human-interested AI, there will be many more copies of me in the future who think that they are me in the past.
However, there could be more interesting ways for measure to propagate back in time. One is that the object of anthropic selection is not observer-moments but whole observer-timelines. Another is the two-thirder solution to the Sleeping Beauty problem.
“I am more likely to be born in a world where life-extension technologies are developing and alignment is easy”. A simple Bayesian update does not support this.
I mean, why not?
P(Life extension is developing and alignment is easy | I will be immortal) = P(Life extension is developing and alignment is easy) * (P(I will be immortal | Life extension is developing and alignment is easy) / P(I will be immortal))
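Spelled out numerically, this is just the standard Bayes update. A minimal sketch, with all probabilities being made-up illustrative values rather than claims about the actual world:

```python
def bayes_update(prior_h, p_e_given_h, p_e):
    """P(H | E) = P(H) * P(E | H) / P(E)."""
    return prior_h * p_e_given_h / p_e

# H = "life extension is developing and alignment is easy"
# E = "I will be immortal"
prior_h = 0.10          # assumed prior for H (illustrative)
p_e_given_h = 0.50      # assumed P(E | H) (illustrative)
p_e_given_not_h = 0.01  # assumed P(E | not H) (illustrative)

# Law of total probability for the evidence.
p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h

posterior = bayes_update(prior_h, p_e_given_h, p_e)
print(posterior)  # H becomes much more likely after conditioning on E
```

With these toy numbers the posterior for H jumps from 0.10 to roughly 0.85, which is the first update described above; the next paragraph explains why a further update can cancel it.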
Typically, this reasoning doesn’t work because we have to update once again based on our current age and on the fact that such technologies do not yet exist, which compensates for the update in the direction of “Life extension is developing and alignment is easy.”
This is easier to understand through the Sleeping Beauty problem. She wakes up once on Monday if it’s heads, and on both Monday and Tuesday if it’s tails. The first update suggests that tails is two times more likely, so the probability becomes 2⁄3. However, as people typically argue, after learning that it is Monday she needs to update again: Heads-Monday and Tails-Monday each have probability 1⁄3, which yields the same probability for tails and heads.
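Both updates in this paragraph can be checked with a small Monte Carlo sketch that simply counts awakenings, with no anthropic machinery assumed:

```python
import random

def simulate(n_experiments=100_000, seed=0):
    """Run the Sleeping Beauty setup: heads -> one awakening (Monday),
    tails -> two awakenings (Monday and Tuesday)."""
    rng = random.Random(seed)
    awakenings = []  # (coin, day) for every awakening
    for _ in range(n_experiments):
        coin = rng.choice(["heads", "tails"])
        awakenings.append((coin, "Monday"))
        if coin == "tails":
            awakenings.append((coin, "Tuesday"))
    return awakenings

awakenings = simulate()

# The first update: per-awakening frequency of tails, close to 2/3.
p_tails = sum(c == "tails" for c, _ in awakenings) / len(awakenings)

# The second update: conditional on "it is Monday",
# tails drops back to about 1/2.
mondays = [c for c, d in awakenings if d == "Monday"]
p_tails_monday = sum(c == "tails" for c in mondays) / len(mondays)

print(p_tails, p_tails_monday)  # roughly 2/3 and 1/2
```

The simulation treats Tails-Monday and Tails-Tuesday as separate countable awakenings, which is exactly the assumption the two-thirder position disputes.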
But on the two-thirder position, we reject this second update because Tails-Monday and Tails-Tuesday are not independent events (as was recently discussed on LessWrong in the Sleeping Beauty series).