Tim, I had a look at the article on full non-indexical conditioning (FNC).
It seems that FNC still can’t cope with very large or infinite universes (or multiverses), ones which make it certain, or very nearly so, that there will be someone, somewhere making exactly our observations and having exactly our evidence and memories. Each such big world assigns essentially the same probability (namely 1) to the non-indexical event that someone has our evidence, and so it is impossible to distinguish between them empirically under FNC.
See one of my earlier posts where I discuss an infinite universe model which has 1K background radiation, but a tiny minority of observers who conclude that it has a 3K temperature (because their observations are misleading). FNC gives us no reason to believe that our universe is not like that, i.e. no reason to favour the alternative model where the background radiation actually is 3K. There seems to be something badly wrong with this as a reasoning principle.
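To make that concrete, here is a toy calculation (my own illustration, not anything from Neal’s paper). Write q for the tiny chance that any single observer has exactly our evidence, and N for the number of observers in a model universe. The non-indexical datum FNC conditions on, “someone has our evidence”, then has probability 1 − (1 − q)^N, which is essentially 1 in any sufficiently big world, so the likelihood ratio between the 1K model and the 3K model is essentially 1 and the prior is left untouched:

```python
import math

# Toy FNC calculation (my own illustration): the non-indexical datum is
# "at least one observer has exactly our evidence".
def p_someone_has_our_evidence(q, n_observers):
    # 1 - (1 - q)^N, computed in log space to avoid underflow
    return -math.expm1(n_observers * math.log1p(-q))

q = 1e-30            # illustrative per-observer chance of having our exact evidence
n_1k_model = 1e40    # observers in the big "1K background" model (illustrative)
n_3k_model = 1e40    # observers in the big "3K background" model (illustrative)

like_1k = p_someone_has_our_evidence(q, n_1k_model)
like_3k = p_someone_has_our_evidence(q, n_3k_model)

print(like_1k, like_3k, like_1k / like_3k)
# -> 1.0 1.0 1.0: both big worlds make the datum certain, so FNC cannot
#    shift any probability between them.
```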
To his credit, Neal is quite open about this, and proposes in effect to ignore such “big” worlds. In Bayesian terms, he will have to assign them prior probability zero, because otherwise FNC itself will drive their posterior probability up to almost one. However, in my view it is unreasonable to assign a consistent model universe (or class of models) prior probability zero, just because if you don’t then that messes up your methodology!
Another criticism is that, if I understand Neal’s article correctly, FNC creates a strong form of Doomsday argument anyway, though on somewhat different grounds. The reason is that if you restrict it to model universes of a finite size (say the size of the observable universe), then FNC favours universes with a high density of civilisations of observers (ones where practically every star system gives rise to life and an intelligent civilisation). But then, to resolve Fermi’s paradox, each such civilisation must have an extremely small probability of ever expanding out of its home star system: it looks like we are forced to accept an expansion probability p_e < 10^-12 (or even < 10^-24, as before). That’s hard to understand except through some sort of “Universal Doom” law, whereby technological civilisations terminate themselves before using their technology to expand.
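For what it’s worth, here is the back-of-envelope version of that bound (the civilisation counts are purely illustrative assumptions on my part): if roughly N independent civilisations have arisen within observational reach and none has visibly expanded, then (1 − p_e)^N cannot be negligible, which pins p_e down to roughly 1/N or less.

```python
import math

# Back-of-envelope bound on the expansion probability p_e.
# n_civs is an illustrative assumption (roughly one civilisation per
# habitable star system over some region), not a figure from Neal.
def max_expansion_prob(n_civs, min_survival=0.5):
    # Largest p_e for which P(none of n_civs ever expands) = (1 - p_e)^n
    # is still at least min_survival.
    return -math.expm1(math.log(min_survival) / n_civs)

print(max_expansion_prob(1e12))   # ~7e-13: galaxy-scale count   -> p_e < ~10^-12
print(max_expansion_prob(1e24))   # ~7e-25: universe-scale count -> p_e < ~10^-24
```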
So, like SIA, the attempt to avoid the DA seems to end up strengthening it.
I agree with your reading, but I do have a terminological nitpick.
I think that the thing you are calling the FNC-Doomsday argument is just a restatement of the Great Filter argument that is inherent in the Fermi paradox analysis. But I don’t think that the Great Filter argument necessarily implies imminent doomsday. For all we know, the Filter is behind us (i.e. life from non-life really is that unlikely). As scientific evidence shows that more and more of our precursors are relatively likely, the probability that the Great Filter is in front of us increases. But I don’t think this analysis gives us much insight into when the Filter will happen.
By contrast, I think clearer communication results from limiting the label “Doomsday argument” to the class of ideas that use anthropic reasoning to predict imminent cataclysm. I agree that most anthropic reasoning appears to suggest imminent doomsday, although I still agree with Neal that reference classes are moral constructs and that it is strange for different moral concepts to have an effect on empirical reasoning.
My pedantry about labels feels a little like disputing definitions, but I really am just trying to be clearer about which arguments are similar (or dissimilar) to which other arguments. And I think the Great Filter has fewer implications than the Doomsday argument, which makes it profitable to treat them separately.
I understand your point about infinite universes, but I think the assertion is justified by empirical evidence. My understanding of the science is that there just doesn’t seem to be enough stuff out there for an infinite universe to be a reasonable hypothesis.
I’ve been re-reading this thread, and think I’ve found an even bigger problem with FNC, even if it is just restricted to “small” finite universes.
As discussed above, in such universe models, FNC causes us to weight our probability estimates towards models with a high density of civilisations, e.g. ones where practically every star system that can give rise to life and an intelligent civilisation does so. And then, if we take our observations seriously, none of those civilisations can have ever expanded into our own solar system, so they must have a very low expansion probability. (Incidentally, even if they were blocked somehow from entering our own solar system, because of a quarantine policy, Prime Directive, Crystal Spheres etc., and the block was somehow enforced universally and over geological timeframes, we still ought to see some evidence of them occupying other nearby star systems: radio emissions, large scale engineering projects, Dyson spheres etc.)
But the worse problem is that FNC pushes even more strongly towards not taking our observations seriously in the first place. Under FNC there is an even higher probability that some civilisations have expanded, have occupied the whole universe, and have populated it very densely with observers. We are then part of some subset of observers who have been deliberately “fooled” into thinking that the universe is largely unoccupied and that we’re in a primitive civilisation. Probably we’re in some form of simulation or experiment.
Unfortunately, I think this is all pretty devastating to FNC:
If infinite universes are allowed at all, then under FNC they will receive weighted probability very close to one.
If infinite universes are ruled out (prior probability zero), then very large finite universes will now receive weighted probability very close to one (see the toy calculation below for how this weighting works).
If very large finite universes are also ruled out (also probability zero), then skeptical hypotheses where the universe is not as it seems, and we are instead in some sort of simulation or experiment, receive weighted probability close to one.
If skeptical hypotheses are also ruled out (let’s give them probability zero as well) then we are still weighted towards universes with a high density of civilizations, and a “Great Filter” which lies in front of us. This still gives pretty doomerish conclusions.
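To put rough numbers on the first couple of steps above (again a toy model of my own, with all specific figures purely illustrative): under FNC the likelihood of the non-indexical datum grows with the number of observers until it saturates at 1, so whatever the biggest worlds we allow, they soak up essentially all the posterior.

```python
import math

# Toy posterior over candidate world sizes under FNC-style conditioning.
# All figures are illustrative; q is the per-observer chance of having
# exactly our evidence, sizes are candidate observer counts.
q = 1e-30
sizes = [1e20, 1e25, 1e30, 1e35]
prior = [1 / len(sizes)] * len(sizes)          # flat prior over the candidates

def likelihood(n_observers):
    # P(someone has our evidence | world with n_observers) = 1 - (1-q)^N
    return -math.expm1(n_observers * math.log1p(-q))

unnorm = [p * likelihood(n) for p, n in zip(prior, sizes)]
total = sum(unnorm)
for n, u in zip(sizes, unnorm):
    print(f"N = {n:.0e}: posterior = {u / total:.2e}")
# Worlds big enough to make our evidence near-certain take essentially all
# the posterior; smaller worlds are penalised by a factor of about q*N.
# Strike out the biggest class and the next-biggest takes over, which is
# the cascade described above.
```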
So we have to do a lot of ad hoc tweaking to get FNC to predict anything sensible at all. And then what it does predict doesn’t sound very optimistic anyway.
No point in arguing about definitions, though Neal also describes his argument as having a “doomsday” aspect (see page 41). On infinite universes, I have no problem with an a posteriori conclusion that the universe is (likely to be) finite; my problem is an a priori assumption that the world must be finite. Remember the prior probability of infinite worlds has to be zero to avoid FNC giving silly conclusions.
On a more technical point, I think I have spotted a difficulty with Neal’s distribution plots on page 39, and am not yet sure how much of a problem this is with his analysis.
Neal considers a parameter p where pM(w) is the probability (density) that someone with his exact memories appears in a particular region of spacetime w. This should be really tiny of course… if memories have 10^11 bits, as Neal suggests, then p would be something like 2^-(10^11). Neal then considers a parameter f where fA(v) is the probability that a species arising at a particular region v of spacetime prevents him from existing with his memories… this is roughly the probability of that species expanding, what I called the probability p_e, multiplied by the proportion of the universe that the species expands into.
However, he then wants to normalise the plots so that (looking at the means in prior probability) we have log p + log f = 0, which initially seems impossible because it would force the prior mean of f up to about 2^(10^11), whereas since f is a probability times a proportion, we must have f ≤ 1.
Neal says we can compensate by rescaling the factors A(w) or M(w), e.g. we could perhaps decide to make M(w) carry the factor of 2^-(10^11) so that p is ~ 1. However, this then requires his V parameter, obtained by integrating M(w)A(w), to be similarly small, around 2^-(10^11), i.e. we must have V very close to zero. So how can he consider cases with V = 0.1, V = 1 or V = 10? Something is wrong with the scaling somewhere.
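To spell out the clash as I read it (my notation, just restating the definitions above; I may be missing an alternative normalisation Neal has in mind):

```latex
% The normalisation constraint, with p ~ 2^{-10^{11}} and f a probability
% times a proportion (so f <= 1, hence log f <= 0):
\[
  \log p + \log f \;\le\; \log p \;\approx\; -10^{11}\log 2 \;\neq\; 0 .
\]
% Rescaling M(w) to absorb the small factor does give p ~ 1, but then the
% same factor propagates into V:
\[
  M(w) \;\to\; 2^{-10^{11}} M(w)
  \quad\Longrightarrow\quad
  V \;=\; \int M(w)\,A(w)\,dw \;\to\; 2^{-10^{11}}\,V \;\approx\; 0 ,
\]
% which is hard to square with the plotted cases V = 0.1, 1, 10.
```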
I am simply not qualified to say anything insightful about the math point you make. The presumptuous philosopher doesn’t bother me, but that may just be scope insensitivity talking.
On the Great Filter, I agree that, if you believe in the Filter, then science discovering things like life being created easily (including complex life), Sol-like suns being common, and Earth-like planets occurring often around Sol-like suns makes it seem likely that the Filter is in front of us. But it doesn’t say when. The Great Filter is consistent with The Crystal Spheres, which the anthropic Doomsday argument just isn’t.