I guess I’m a bit out of the loop on questions about how to define uncertainty, so I’m a bit confused about what position you are against or how this is different from what others do. That is, it seems to me like you are trying to fix a problem you perceive in the way people currently think about uncertainty, but I’m not sure what that problem is, so I can’t even tell how this framing might fix it. I’ve been reading this sequence of posts thinking “yeah, sure, this all sounds reasonable” but also without really understanding the context for it. I know you did the post on anthropics, but even there it wasn’t really clear to me how this framing helps us over what is otherwise normally done, although perhaps that reflects my ignorance of existing arguments about which methods of anthropic reasoning are correct.
Yeah, I wrote this assuming people have the context.
So there’s a class of questions where standard probability theory doesn’t give clear answers. This was dubbed anthropics or anthropic probability. To deal with this, two principles were worked out, SSA and SIA, which are well-defined and produce answers. But for both of them, there are problems where their answers seem absurd.
I think the best way to understand the problem of anthropics is by looking at the Doomsday argument as an example. Consider all humans who will ever live (assuming there are only finitely many). Say that’s N many. For simplicity, we assume that there are only two cases: either humanity goes extinct tomorrow, in which case N is about sixty billion – but let’s make that 10^11 for simplicity – or humanity flourishes and expands through the cosmos, in which case N is, say, 10^18. Let’s call S the hypothesis that humans go extinct, and L the hypothesis that they don’t (that’s for “short” and “long” human history). Now we want to update P(L) on the observation that you are human number n (so n will be about 30 billion). Let’s call that observation O. Also let p be your prior on L, so P(L)=p.
The Doomsday argument now goes as follows. The term P(O|L) is 10^-18, because if L is true then there are a total of 10^18 people, each position is equally likely, and 10^-18 is just the chance of getting your particular one. On the other hand, P(O|S) is 10^-11, because if S is true there are only 10^11 people in total. So we simply apply Bayes to the observation O, and then use the law of total probability in the denominator, to obtain

$$P(L|O) = \frac{P(O|L)\,P(L)}{P(O)} = \frac{10^{-18}\,p}{P(O|L)\,P(L) + P(O|\neg L)\,P(\neg L)} = \frac{10^{-18}\,p}{10^{-18}\,p + 10^{-11}\,(1-p)}$$
If p=0.999, this term comes out to roughly 10^-4. So even if you were very confident that humanity would make it, after updating you should assign only about a hundredth of a percent to that. If you want to work it out yourself, this is where you should pause and think about which part of this is wrong.
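If you want to check the arithmetic, here is the same calculation spelled out (a quick Python sketch, purely illustrative; it just uses the numbers from the setup above, and the variable names are mine):

```python
# Numeric check of the SSA-style Doomsday update described above (illustrative sketch).
N_short = 10**11   # total humans ever, if humanity goes extinct tomorrow (hypothesis S)
N_long = 10**18    # total humans ever, if humanity spreads through the cosmos (hypothesis L)
p = 0.999          # prior P(L)

# Likelihood of being human number n, treating every actually existing position as equally likely.
P_O_given_L = 1 / N_long
P_O_given_S = 1 / N_short

# Bayes with the law of total probability in the denominator.
P_L_given_O = P_O_given_L * p / (P_O_given_L * p + P_O_given_S * (1 - p))
print(P_L_given_O)  # roughly 1e-4: the confident prior on L gets almost entirely wiped out
```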
So the part that’s problematic is the probability P(O|L). There is a hidden assumption that you had to be one of the humans who was actually born. This assumption was then dubbed the Self-Sampling Assumption (SSA), namely
All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class.
So SSA endorses the Doomsday argument. The principled way to debunk this is the Self-Indication Assumption (SIA), which says
All other things equal, an observer should reason as if they are randomly selected from the set of all possible observers.
If you apply SIA, then P(O|L)=P(O|S) and hence P(L|O)=P(L). Updating on O no longer does anything.
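To spell out why (this uses the common formulation of SIA where each hypothesis is first weighted by its number of observers and you then sample uniformly within it; the notation N_S, N_L for the two population sizes is mine):

$$P_{\text{SIA}}(L\mid O) \;\propto\; \underbrace{N_L\,p}_{\text{observer-weighted prior}}\cdot\underbrace{\tfrac{1}{N_L}}_{P(O\mid L)} \;=\; p, \qquad P_{\text{SIA}}(S\mid O) \;\propto\; N_S\,(1-p)\cdot\tfrac{1}{N_S} \;=\; 1-p,$$

so after normalizing, the posterior is just the prior again: the 1/N penalty for being one particular observer is exactly cancelled by SIA’s preference for hypotheses with more observers.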
So this is the problem where SSA gives a stupid answer. The problem where SIA gives the stupid answer is the Presumptuous Philosopher problem: there are two theories of how large the universe is, and according to one it’s 10^9 times as large as it is according to the other. If you apply the SIA rule, you get that the probability of living in the small universe is 1/(1+10^9) (if the prior was 1/2 on both).
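Spelled out with the same bookkeeping as above (assuming the number of observers scales with the size of the universe, and writing N for the number of observers under the small-universe theory, so the large one has 10^9 N; both labels are mine):

$$P_{\text{SIA}}(\text{small}) = \frac{\tfrac{1}{2}\,N}{\tfrac{1}{2}\,N + \tfrac{1}{2}\cdot 10^{9}\,N} = \frac{1}{1+10^{9}},$$

so SIA is essentially certain we live in the large universe before looking at any evidence, which is what makes the philosopher presumptuous.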
There is also Full Non-indexical Conditioning, which is technically a different theory and argues differently, but it gives the same answers as SIA in every case, so basically there are just the two. And that, as far as I know, is the state of the art. No one has come up with a theory that can’t be made to look ridiculous. Stuart Armstrong has made a bunch of LW posts about this recently-ish; he hasn’t proposed a solution, but he has pointed out that existing theories are problematic. This one, for example.
I’ve genuinely spent a lot of time thinking really hard about this stuff, and my conclusion is that the “reason as if you’re randomly selected from a set of observers” thing is the key problem here. I think that’s the reason why this still hasn’t been worked out. It’s just not the right way to look at it. I think the relevant variable which everyone is missing is that there are two fundamentally different kinds of uncertainty, and if you structure your theory around that, everything works out. And I think I do have a theory where everything works out. It doesn’t update on Doomsday and it doesn’t say the large universe is 10^9 times as likely as the small one. It doesn’t give a crazy answer anywhere. And it does it all based on simple principles.
Does that answer the question? It’s possible that I should have started the sequence with a post that states the problem; I just assumed everyone would know the problem without ever thinking about whether that’s actually the case.
Could you explain why the Doomsday argument answer seems absurd, or why I don’t have to be a human who was actually born?
I think so, thanks.