No point in arguing about definitions, though Neal also describes his argument as having a “doomsday” aspect (see page 41). On infinite universes, I have no problem with an a posteriori conclusion that the universe is (likely to be) finite; my problem is with an a priori assumption that the world must be finite. Remember that the prior probability of infinite worlds has to be zero to stop FNC from giving silly conclusions.
On a more technical point, I think I have spotted a difficulty with Neal’s distribution plots on page 39, and I am not yet sure how much of a problem this poses for his analysis.
Neal considers a parameter p where pM(w) is the probability (density) that someone with his exact memories appears in a particular region of spacetime w. This should of course be really tiny: if memories contain 10^11 bits, as Neal suggests, then p would be something like 2^-(10^11). Neal then considers a parameter f where fA(v) is the probability that a species arising at a particular region v of spacetime prevents him from existing with his memories; this is roughly the probability of that species expanding (what I called the probability p_e) multiplied by the proportion of the universe that the species expands into.
However, he then wants to normalise the plots so that (looking at the means under the prior probability) log p + log f = 0. This initially seems impossible, because it would force the prior mean of f up to about 2^(10^11), whereas f is a probability multiplied by a proportion, so we must have f ≤ 1.
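To make the tension concrete, here is a minimal numerical sketch (my own, not Neal’s notation), working in log base 2 and using the rough 10^11-bit memory figure from above:

```python
# Work in log2 so the astronomically small numbers stay representable.
memory_bits = 1e11          # rough size of a memory state, as suggested above (assumption)
log2_p = -memory_bits       # p ~ 2^-(10^11): chance of one exact memory configuration

# The proposed normalisation log p + log f = 0 then forces:
log2_f = -log2_p            # i.e. f ~ 2^(10^11)

# But f is a probability times a proportion, so f <= 1, i.e. log2(f) <= 0.
print(f"required log2(f) = {log2_f:.0e}, but f <= 1 needs log2(f) <= 0")
```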
Neal says we can compensate by rescaling the factors A(w) or M(w): for example, we could decide to make M(w) something like 2^-(10^11) so that p is ~1. However, this then requires his V parameter, obtained by integrating M(w)A(w), to be similarly scaled down to about 2^-(10^11), i.e. V must be very close to zero. So how can he consider cases with V = 0.1, V = 1 or V = 10? Something is wrong with the scaling somewhere.
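Here is a sketch of why the rescaling doesn’t seem to help, again in log base 2 and with hypothetical order-of-magnitude numbers (this is my reading of the scaling, not Neal’s own calculation): rescaling M(w) by a constant c keeps the physical density pM(w) fixed only if p is divided by c, while V = ∫M(w)A(w)dw gets multiplied by c, so the product pV is invariant and cannot have both factors be of order one.

```python
# Rescaling M(w) -> c*M(w) while keeping the physical density p*M(w) fixed
# means p -> p/c, and V = integral of M(w)A(w) -> c*V.  So p*V is invariant.
log2_p = -1e11      # before rescaling: p ~ 2^-(10^11), with M(w) of order one (assumption)
log2_V = 0.0        # suppose V is of order one before rescaling (assumption)
log2_c = -1e11      # choose c = 2^-(10^11) so that the new p is ~ 1

log2_p_new = log2_p - log2_c    # 0      -> p ~ 1, as desired
log2_V_new = log2_V + log2_c    # -1e11  -> V ~ 2^-(10^11), nowhere near 0.1, 1 or 10

assert log2_p_new + log2_V_new == log2_p + log2_V   # p*V unchanged by the rescaling
print(f"log2(p) = {log2_p_new}, log2(V) = {log2_V_new}")
```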
I am simply not qualified to say anything insightful about the math point you make. The Presumptuous Philosopher doesn’t bother me, but that may just be scope insensitivity talking.
On the Great Filter, I agree that believing in the Filter, combined with science discovering that life (including complex life) arises easily, that Sol-like stars are common, and that Earth-like planets occur often around Sol-like stars, makes it seem likely that the Filter is in front of us. But it doesn’t say when. The Great Filter is consistent with The Crystal Spheres, which the anthropic Doomsday argument just isn’t.