I have answers to all of these questions! I just haven’t posted them yet. If I present an entirely new theory in one super long post, then obviously no-one reads it. In fact, it would be irrational to read it because the prior that I’m onto something is just too low to invest the time. A sequence of short posts where each post makes a point which can be understood by anyone having read up to that post – that’s not optimal, but how else could you do it? This is a completely genuine question if you have an answer.
So the structure I’ve chosen is to first state the distinction, then lay out the model that deals with randomness only (because that already does some stuff which SIA and SSA can’t), then explain how to deal with ignorance, which makes the model complete, and then present a formalized version. The questions you just listed all deal with the ignorance part, the part that’s still in the pipeline.
Well, and I didn’t know I was competing with UDASSA, because I didn’t know it existed. For some reason it’s sitting at 38 karma, which makes it easy to miss, and you’re the first to bring it up. I’ll read it before I post anything else.
It’s true that UDASSA is tragically underrated, given that (it seems to me) it provides a satisfactory resolution to all anthropic problems. I think this might be a situation where people tend to leave the debate and move on to something else when they seem to have found a satisfactory position, like how most LW people don’t bother arguing about whether god exists anymore.
I think this might be a situation where people tend to leave the debate and move on to something else when they seem to have found a satisfactory position
Well, not exactly: I came up with UDASSA originally but found it not entirely satisfactory, so I moved on to something that eventually came to be called UDT. I wrote down my reasons in “against UD+ASSA” and under Paul’s post.
Perhaps it would be good to have this history be more readily available to people looking for solutions to anthropic reasoning though, if you guys have suggestions on how to do that.
The solution to this kind of thing should be a wiki, I think. If the LessWrong wiki were kept up to date enough to have a page on anthropics, that would have solved the issue in this case and should work for many similar cases.
Right, I knew that many people had since moved on to UDT due to limitations of UDASSA for decision-making. What I meant was that UDASSA seems to be satisfactory at resolving the typical questions about anthropic probabilities, setting aside decision theory/noncomputability issues.
I agree it would be nice to have all this information in a readily accessible place. Maybe the posts setting out the ideas and later counter-arguments could be put into a curated sequence.
I actually knew about UDT. Enough to understand how it wins in Transparent Newcomb, but not enough to understand that it extends to anthropic problems.
The ASSA is the Absolute Self Selection Assumption. It is a variant on the Self Selection Assumption (SSA) of Nick Bostrom. The SSA says that you should think of yourself as being a randomly selected conscious entity (aka “observer”) from the universe. The Absolute SSA extends this concept to “observer moments” (OMs). An observer moment is one moment of existence of an observer’s consciousness. If we think of conscious experience as a process, the OM is created by dividing this process up into small units of time such that no perceptible change occurs within that unit. The ASSA then says that you should think of the OM you are presently experiencing as being randomly selected from among all OMs in the universe.
This is what I’m doing. I haven’t read the entire thing yet, but this paragraph basically explains the key idea of my model. I was going to address how to count instances eventually (near the end), and it bottoms out at observer moments. The full idea, abbreviated, is “start with a probability distribution over different universes, in each one apply the randomness thing via counting observer moments, then weigh those results with your distribution”. This gives you intuitive results in the Doomsday argument (no update), the Presumptuous Philosopher (some bias towards the larger universe, depending on how strongly you believe in other universes), Sleeping Beauty (basically 1⁄3), and the “how do we update on X-risk given that we’re still alive” question (complicated).
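(To make that concrete, here is a minimal Python sketch of one reading of that recipe, where the per-universe step just counts the observer moments consistent with your current experience; the universes, priors, and counts below are made-up placeholders, not part of the model itself.)

```python
# Minimal sketch (one reading of the recipe, not a definitive implementation):
# start with a prior over universes, count the observer moments (OMs) in each
# that are consistent with your current experience, then reweight and normalize.
# All names and numbers here are illustrative placeholders.

def posterior(universes):
    """universes: list of (prior, consistent_OM_count) pairs."""
    weights = [prior * oms for prior, oms in universes]
    total = sum(weights)
    return [w / total for w in weights]

# Sleeping Beauty: the heads-world contains one awakening matching Beauty's
# current experience, the tails-world contains two.
p_heads, p_tails = posterior([(0.5, 1), (0.5, 2)])
print(p_heads)  # ~0.333, i.e. "basically 1/3"
```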
It appears that I independently came up with ASSA, plus a different way of presenting it. And probably a weaker formalism.
I’m obviously unhappy about this, but thank you for bringing it to my attention now rather than later.
One reason I was assuming there couldn’t be other theories I was unaware of is that Stuart Armstrong was posting about anthropics and he seemed totally unaware.
Yeah, I also had similar ideas for solving anthropics a few years ago, and was surprised when I learned that UDASSA had been around for so long. At least you can take pride in having found the right answer independently.
I think that UDASSA gives P(heads) = 1⁄2 on the Sleeping Beauty problem due to the way it weights different observer-moments, proportional to 2^(-description length). This might seem a bit odd, but I think it’s necessary to avoid problems with Boltzmann brains and the like.
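(To spell the weighting out with a toy example — the description lengths below are invented numbers for illustration, not actual Kolmogorov complexities.)

```python
# Toy illustration of UDASSA-style weighting: each observer moment gets weight
# proportional to 2^(-description length in bits), then the weights are normalized.
# The lengths here are made up for the example.

def normalized_weights(lengths_in_bits):
    raw = [2.0 ** (-L) for L in lengths_in_bits]
    total = sum(raw)
    return [w / total for w in raw]

# An observer moment that takes one extra bit to specify gets half the weight:
print(normalized_weights([100, 101]))  # [0.666..., 0.333...]
```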
You mean P(Monday)? In that case it would be different, although it would have some similarity. Why is the description length of the Monday observer moment longer than the Tuesday one?
No, I mean Beauty’s subjective credence that the coin came up heads. That should be 1⁄2 by the nature of a coin flip. Then, if the coin comes up tails, you need 1 bit to select between the subjectively identical states of waking up on Monday or Tuesday. So in total:
P(heads, Monday) = 1⁄2
P(tails, Monday) = 1⁄4
P(tails, Tuesday) = 1⁄4
(EDIT: actually this depends on how difficult it is to locate memories on Monday vs. Tuesday, which might be harder given that your memory has been erased. I think that for ‘natural’ ways of locating your consciousness it should be close to 1⁄2 / 1⁄4 / 1⁄4 though)
(DOUBLE EDIT, MUCH LATER: actually it now seems to me like the thirder position might apply here, since the density of spacetime locations with the right memories is higher in the tails branch than in the heads branch)
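(For what it’s worth, here is a tiny numeric check of the original halfer-style split above, before the caveats in the two edits, assuming the only relevant difference in description length is the one extra bit needed to pick the day in the tails branch.)

```python
# Check of the halfer-style split: 1 bit selects the coin outcome,
# and in the tails branch 1 further bit selects Monday vs. Tuesday.
p_heads_monday  = 2 ** -1             # "heads"
p_tails_monday  = 2 ** -1 * 2 ** -1   # "tails" + "Monday"
p_tails_tuesday = 2 ** -1 * 2 ** -1   # "tails" + "Tuesday"

assert p_heads_monday + p_tails_monday + p_tails_tuesday == 1.0
print(p_heads_monday)  # 0.5 -> P(heads) = 1/2 on this reading
```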