The Doomsday Argument is premised on the idea that one should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class. If we consider ourselves as randomly placed in the birth rank, then we should expect a doomsday event soon.
However, it seems we should also condition on the fact that one can reason about anthropics, and perhaps that one actually does. We could likewise consider observer moments (OMs) that reason about anthropics. If we do, then perhaps AI will not kill us but instead enlighten us about our future, or about some central question concerning humanity. The number of OMs reasoning about anthropics could fall simply because, with superintelligence, nobody feels it necessary: all such questions are better answered by AI.
In principle, any observer should condition on everything they observe. Bounded rationality means this isn’t always useful, but it does suggest that deliberately ignoring things that might be relevant may cripple the ability to form good world models.
The usual Doomsday argument deliberately ignores practically everything, and is worth correspondingly little.
I think aturchin had a similar idea, in which people simply lose interest in the DA.