Hi Stuart. It’s a while since I’ve posted.
Here’s one way of asking the question which does lead naturally to the Doomsday answer.
Consider two universes. They’re both infinite (or if you don’t like actual infinities, are very very large, so they both have a really huge number of civilisations).
In universe 1, almost all the civilisations die off before spreading through space, so that the average population of a civilisation through time is less than a trillion.
In universe 2, a fair proportion of the civilisations survive and grow to galaxy-size or bigger, so that the average population of a civilisation through time is much more than a trillion trillion.
Now consider two more universes. Universe 3 is like Universe 1 except that the microwave background radiation 14 billion years after the Big Bang is 30K rather than 3K. Universe 4 is likewise like Universe 2, except for the same difference in microwave background radiation. Both Universe 3 and Universe 4 are so big (or infinite) that they contain civilisations that believe the background radiation has temperature 3K, because every measurement they’ve ever made of it has accidentally given the same wrong answer.
Here’s the question to think about.
Is there a sensible way of doing anthropics (or indeed science in general) that would lead us to conclude we are probably in Universe 1 or 2 (rather than Universe 3 or 4) without also concluding that we are probably in Universe 1 (rather than Universe 2)?
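Stated schematically (this is just my shorthand for the question above, with E standing for our evidence that every measurement has come out at 3K):

```latex
% The challenge: find an anthropic (or scientific) updating rule R such that
\[
  P_R(U_1 \cup U_2 \mid E) \;\gg\; P_R(U_3 \cup U_4 \mid E)
  \quad\text{without also forcing}\quad
  P_R(U_1 \mid E) \;\gg\; P_R(U_2 \mid E).
\]
```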
“How many copies of people like me are there in each universe?”
Then as long as your copies know that 3K has been observed, and excluding simulations and such, the answers are “(a lot, a lot, not many, not many)” in the four universes (I’m interpreting “die off before spreading through space” as “die off just before spreading through space”).
This is the SIA answer, since I asked the SIA question.
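For concreteness, a minimal sketch of the SIA weighting I’m using, assuming finite copy-counts N_i of people-like-me in universe U_i and some prior P(U_i):

```latex
% SIA: weight each universe by its prior times the number of copies of people like me in it.
\[
  P_{\mathrm{SIA}}(U_i \mid E) \;=\;
  \frac{P(U_i)\, N_i}{\sum_j P(U_j)\, N_j},
  \qquad
  N_1 \approx N_2 \gg N_3 \approx N_4,
\]
% which is just the "(a lot, a lot, not many, not many)" answer in formula form.
```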
Thanks Stuart.
The difficulty is that, by construction, there are infinitely many copies of me in each universe (if the universes are all infinite), or else there is a colossally huge number of copies of me in each universe, so big that it saturates my utility bounds (assuming my utilities are finite and bounded; if they’re not, the decision theory leads to chaotic results anyway).
So SIA is not an approach to anthropics (or science in general) which allows us to conclude we are probably in universe 1 or 2 (rather than 3 or 4). All SIA really says is “You are in some sort of really big or infinite universe, but beyond that I can’t help you work out which”. That’s not helpful for decision making, and doesn’t allow science in general to work.
Incidentally, when you say there are “not many” copies of me in universes 3 and 4, then you presumably mean “not a high proportion, compared to the vast total of observers”. That’s implicitly SSA reasoning being used to discriminate against universes 3 and 4… but then of course it also discriminates against universe 2.
I’ve worked through pretty much all the anthropic approaches over the years, and they all seem to stumble on this question. All the approaches which confidently separate universes 3 and 4 also separate 1 from 2.
If we set aside infinity, which I don’t know how to deal with, then the SIA answer does not depend on utility bounds—unlike my anthropic decision theory post.
Q1: “How many copies of people (currently) like me are there in each universe?” is well-defined in all finite settings, even huge ones.
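As a toy illustration (the counts and the uniform prior are invented purely for the example), the finite-case calculation is just a normalised count of copies, with no utility function appearing anywhere:

```python
# Toy finite-case Q1 / SIA calculation with invented copy-counts and a uniform prior.
# Note that no utility function or utility bound appears anywhere.
copies_of_me = {
    "U1": 10**12,  # lots of small civilisations whose observers measure 3K
    "U2": 10**12,  # big civilisations, but a similar number of pre-spread observers like me
    "U3": 10**3,   # only the rare observers whose 3K measurements all went wrong
    "U4": 10**3,
}
prior = {u: 0.25 for u in copies_of_me}  # uniform prior over the four universes

weights = {u: prior[u] * n for u, n in copies_of_me.items()}
total = sum(weights.values())
posterior = {u: w / total for u, w in weights.items()}
print(posterior)
# U1 and U2 each come out near 0.5; U3 and U4 near 5e-10:
# "a lot, a lot, not many, not many", well-defined for any finite counts.
```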
Incidentally, when you say there are “not many” copies of me in universes 3 and 4, then you presumably mean “not a high proportion, compared to the vast total of observers”
No, I mean not many, as compared with how many there are in universes 1 and 2. Other observers are not relevant to Q1.
I’ll reiterate my claim that different anthropic probability theories are “correct answers to different questions”: https://www.lesswrong.com/posts/nxRjC93AmsFkfDYQj/anthropic-probabilities-answering-different-questions
I get that this is a consistent way of asking and answering questions, but I’m not sure it’s actually helpful for doing science.
If, say, universes 1 and 2 contain TREE(3) copies of me while universes 3 and 4 contain BusyBeaver(1000) then I still don’t know which I’m more likely to be in, unless I can somehow work out which of these vast numbers is vaster. Regular scientific inference is just going to completely ignore questions as odd as this, because it simply has to. It’s going to tell me that if measurements of background radiation keep coming out at 3K, then that’s what I should assume the temperature actually is. And I don’t need to know anything about the universe’s size to conclude that.
Returning to SIA: to conclude there are more copies of me in universes 1 and 2 (versus 3 or 4), SIA will have to know their relative sizes. The larger, the better, but not infinite, please. And this is a major problem, because SIA’s conclusion is then dominated by how the finite truncation is applied to avoid the infinite case.
Suppose we truncate all universes at the same large physical volume (or 4d volume); then there are strictly more copies of me in universes 1 and 2 than in 3 and 4 (but about the same number in universes 1 and 2). That works so far: it is in line with what we probably wanted. But unfortunately this volume-based truncation also favours universe 5-1:
5-1. Physics is nothing like it appears. Rather, the universe is full of an extremely dense solid, performing a colossal number of really fast computations, a high fraction of which simulate observers in universe 1.
It’s not difficult to see that, under a volume cutoff, 5-1 is favoured over the analogous simulation scenarios 5-2, 5-3 and 5-4 (since the density of observers like me is highest in 5-1), and over the ordinary universes 1 to 4 (since the dense solid packs far more such observers into the same volume).
If we instead truncate universes at the same large total number of observers (or the same large total utility), then universe 1 now has more copies of me (because, for a fixed total number of observers, its small civilisations mean it contains more civilisations, and hence more observers at the pre-spread stage). Universe 1 is favoured.
Or if I truncate universes at the same large number of total copies of me (because perhaps I don’t care very much about people who aren’t copies of me), then I can no longer distinguish between universes 1 to 4, or indeed 5-1 to 5-4.
So whichever way we truncate, we’re back to the same depressing conclusion: either universe 1 ends up preferred over the others (or perhaps universe 5-1 is preferred over the others), or there is no preference among any of the universes.
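To see how much the choice of cutoff is doing here, a toy sketch with entirely invented numbers (observers per civilisation, civilisation density, and the fraction of wrong 3K measurements are all just placeholders):

```python
# Toy comparison of the three truncation schemes. All numbers are invented for illustration.
EARLY_PER_CIV = 1e11                 # pre-spread observers per civilisation (assumed similar everywhere)
TOTAL_PER_CIV = {"U1": 1e12, "U2": 1e24, "U3": 1e12, "U4": 1e24}  # total observers per civilisation
WRONG_3K_FRACTION = 1e-15            # fraction of early observers in U3/U4 whose 3K readings all failed
CIVS_PER_VOLUME = 1e12               # civilisations inside the shared volume cutoff (assumed equal density)

def copies_of_me(universe, n_civs):
    frac = 1.0 if universe in ("U1", "U2") else WRONG_3K_FRACTION
    return n_civs * EARLY_PER_CIV * frac

# (a) Same spatial (4d) volume in every universe:
print({u: copies_of_me(u, CIVS_PER_VOLUME) for u in TOTAL_PER_CIV})
# -> U1 ~ U2 >> U3 ~ U4: the answer we wanted.

# (b) Same total number of observers in every universe:
N_TOTAL = 1e30
print({u: copies_of_me(u, N_TOTAL / TOTAL_PER_CIV[u]) for u in TOTAL_PER_CIV})
# -> U1 >> U2: the Doomsday-flavoured answer.

# (c) Same total number of copies of me in every universe:
print({u: 1e20 for u in TOTAL_PER_CIV})
# -> no discrimination between any of the universes.
```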
These are valid points, but we have wandered a bit away from the initial argument, and we’re now talking about numbers that can’t be compared (my money is on TREE(3) being smaller in this example, but that’s irrelevant to your general point), or ways of truncating in the infinite case.
But we seem to have solved the finite-and-comparable case.
Now, back to the infinite case. First of all, there may be a correct decision even if probabilities cannot be computed.
If we have a suitable utility function, we may decide simply not to care about what happens in universes that are of the type 5, which would rule them out completely.
Or maybe the truncation can be improved slightly. For example, we could give each observer a bubble of radius 20 million light years, defined according to their own subjective experience: how many individuals they would expect to encounter within that radius, if they were made immortal and allowed to explore it fully.
Then we truncate by this subjective bubble, or something similar.
But yeah, in general, the infinite case is not solved.
Thanks again for the useful response.
My initial argument was really a question: “Is there any approach to anthropic reasoning that allows us to do basic scientific inference, but does not lead to Doomsday conclusions?” So far I’m skeptical.
The best response you’ve got is, I think, twofold.
The first: use SIA, but ignore the infinite case (even though the internal logic of SIA forces the infinite case) because we don’t know how to handle it. When applying SIA to large finite cases, truncate universes by a large volume cutoff (4d volume) rather than by a large population cutoff or large utility cutoff. Oh, and ignore simulations, because taking those into account leads to odd conclusions as well.
That might perhaps work, but it does look horribly convoluted. To me it does seem like determining the conclusion in advance (you want SIA to favour universes 1 and 2 over 3 and 4, but not favour 1 over 2) and then hacking around with SIA until it gives that result.
Incidentally, I think you’re still not out of the woods with a volume cutoff. If it is very large in the time dimension, then SIA is going to start favouring universes which have Boltzmann Brains in the very far future over universes whose physics never allows Boltzmann Brains. And then SIA is going to suggest that not only are we probably in a universe with lots of BBs, we most likely are BBs ourselves (because almost all observers with exactly our experiences are BBs). So SIA calls for further surgery, either to remove BBs from consideration or to apply the 4d volume cutoff in a way that doesn’t lead to lots of Boltzmann Brains.
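A toy calculation of why a deep time cutoff causes trouble (the observer count and the Boltzmann Brain formation rate are invented placeholders; only the scaling matters):

```python
# Toy illustration: under a time-deep cutoff, Boltzmann Brain copies of me grow with the cutoff,
# while ordinary copies of me do not. All numbers are invented placeholders.
ORDINARY_COPIES = 1e24        # observers with my experiences living in the universe's habitable era
BB_RATE_PER_YEAR = 1e-20      # assumed rate of BBs-with-exactly-my-experiences per year, over the cutoff volume

for cutoff_years in (1e12, 1e50, 1e100):
    bb_copies = BB_RATE_PER_YEAR * cutoff_years
    frac_bb = bb_copies / (bb_copies + ORDINARY_COPIES)
    print(f"time cutoff {cutoff_years:.0e} years: fraction of my copies that are BBs = {frac_bb:.3g}")
# Once the cutoff is deep enough, almost every copy of me is a BB, so SIA both favours
# BB-permitting universes and implies I am probably a BB myself.
```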
The second: forget about both SIA and SSA and revert to an underlying decision theory, viz. your ADT. Let the utility function take the strain.
The problem with this is that ADT with unbounded utility functions doesn’t lead to stable conclusions. So you have to bound or truncate the utility function.
But then ADT is going to pay the most attention to universes whose utility is close to the cutoff … namely versions of universes 1, 2, 3 and 4 which have utility at or near the maximum. For the reasons I’ve already discussed above, that’s not in general going to give the same results as applying a volume cutoff. If the utility scales with the total number of observers (or observers like me), then ADT is not going to say “Make decisions as if you were in universe 1 or 2 … but with no preference between these … rather than as if you were in universe 3 or 4”.
I think the most workable utility function you’ve come up with is the one based on subjective bubbles of order galactic volume or thereabouts, i.e. the utility function scales roughly linearly with the number of observers in the volume surrounding you, but doesn’t care about what happens outside that region (or in any simulations, if they are of different regions). Using that is roughly equivalent to applying a volume truncation using regular astronomical volumes (rather than much larger volumes).
However, the hack to avoid simulations looks a bit unnatural to me (why wouldn’t I care about simulations which happen to be in the same local volume?). Also, I think this utility function might then tend to favour “zoo” hypotheses or “planetarium” hypotheses (i.e. decisions are made as if in a universe densely packed with planetaria containing human-level civilisations, rather than simulations of such civilisations).
More worryingly, I doubt if anyone really has a utility function that looks like this, i.e. one that cares about observers 1 million light years away just as much as it cares about observers here on Earth, but then stops caring if they happen to be 1 trillion light years away...
So again I think this looks rather like assuming the right answer, and then hacking around with ADT until it gives the answer you were looking for.