The ASSA is the Absolute Self-Selection Assumption. It is a variant of the Self-Sampling Assumption (SSA) of Nick Bostrom. The SSA says that you should reason as if you were a randomly selected conscious entity (aka “observer”) from the universe. The Absolute SSA extends this concept to “observer moments” (OMs). An observer moment is one moment of existence of an observer’s consciousness. If we think of conscious experience as a process, an OM is created by dividing this process into units of time small enough that no perceptible change occurs within a single unit. The ASSA then says that you should think of the OM you are presently experiencing as randomly selected from among all OMs in the universe.
This is what I’m doing. I haven’t read the entire thing yet, but this paragraph basically explains the key idea of my model. I was going to address how to count instances eventually (near the end), and it bottoms out at observer moments. The full idea, abbreviated, is “start with a probability distribution over different universes, in each one apply the randomness thing via counting observer moments, then weigh those results with your distribution”. This gives you intuitive results in Doomsday (no update), P/P (some bias towards larger universe depending on how strongly you believe in other universes), Sleeping Beauty (basically 1⁄3) and the “how do we update on X-risk given that we’re still alive” question (complicated).
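To make the “count observer moments, then weigh with your distribution” recipe concrete, here is a minimal sketch under one assumed reading: each candidate universe is weighted by its prior times the number of observer moments in it consistent with your current experience, and the result is normalized. The `posterior` function and the specific counts are illustrative assumptions, not anything from the original comment.

```python
from fractions import Fraction

def posterior(universes):
    """Weight each universe by prior * number of observer moments (OMs)
    consistent with the current experience, then normalize.

    `universes`: dict mapping name -> (prior, matching_om_count).
    This is one assumed reading of "count observer moments, then
    weigh those results with your distribution".
    """
    weights = {name: prior * oms for name, (prior, oms) in universes.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Sleeping Beauty: the heads-world contains 1 awakening OM,
# the tails-world contains 2 (Monday and Tuesday), priors 1/2 each.
sb = posterior({"heads": (Fraction(1, 2), 1),
                "tails": (Fraction(1, 2), 2)})
# sb["heads"] == Fraction(1, 3) -- the "basically 1/3" answer above
```

Under this reading, Doomsday gives no update (the OM you occupy is equally likely under long and short futures once you condition on your experience), matching the results listed above.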
It appears that I independently came up with ASSA, plus a different way of presenting it. And probably a weaker formalism.
I’m obviously unhappy about this, but thank you for bringing it to my attention now rather than later.
One reason I assumed there couldn’t be other theories I was unaware of is that Stuart Armstrong was posting about anthropics and seemed totally unaware of them.
Yeah, I also had similar ideas for solving anthropics a few years ago, and was surprised when I learned that UDASSA had been around for so long. At least you can take pride in having found the right answer independently.
I think that UDASSA gives P(heads) = 1⁄2 on the Sleeping Beauty problem due to the way it weights different observer-moments, proportional to 2^(-description length). This might seem a bit odd, but I think it’s necessary to avoid problems with Boltzmann brains and the like.
You mean P(Monday)? In that case the answer would be different, though similar. Why is the description length of the Monday observer moment longer than the Tuesday one?
No, I mean Beauty’s subjective credence that the coin came up heads. That should be 1⁄2 by the nature of a coin flip. Then, if the coin comes up tails, you need 1 bit to select between the subjectively identical states of waking up on Monday or Tuesday. So in total:
P(heads, Monday) = 1⁄2
P(tails, Monday) = 1⁄4
P(tails, Tuesday) = 1⁄4
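The 2^(-description length) weighting behind these numbers can be sketched as follows. The bit counts are an assumed toy encoding, not anything derived in the thread: 1 bit to record the coin flip, plus, on tails, 1 more bit to pick between the subjectively identical Monday and Tuesday awakenings.

```python
from fractions import Fraction

# Bits needed to locate each observer moment in an assumed toy encoding:
# 1 bit for the coin flip; on tails, 1 extra bit to select Monday vs.
# Tuesday between subjectively identical awakenings.
bits = {("heads", "Monday"): 1,
        ("tails", "Monday"): 2,
        ("tails", "Tuesday"): 2}

# UDASSA-style weight: each OM gets measure proportional to 2^(-bits).
weights = {om: Fraction(1, 2 ** b) for om, b in bits.items()}
total = sum(weights.values())
probs = {om: w / total for om, w in weights.items()}
# probs: heads-Monday 1/2, tails-Monday 1/4, tails-Tuesday 1/4
```

Note how P(heads) = 1⁄2 falls out: the extra bit needed in the tails branch exactly halves the measure of each tails awakening.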
(EDIT: actually this depends on how difficult it is to locate memories on Monday vs. Tuesday, which might be harder given that your memory has been erased. I think that for ‘natural’ ways of locating your consciousness it should be close to 1⁄2 / 1⁄4 / 1⁄4 though)
(DOUBLE EDIT, MUCH LATER: actually it now seems to me like the thirder position might apply here, since the density of spacetime locations with the right memories is higher in the tails branch than in the heads branch)