Also, if it doesn’t resurrect those in graves or urns then it’s not gonna resurrect cryonauts either, so cryonics is out.
Why? If FAI is weak enough, it might be unable to resurrect non-cryonauts. Also maybe there will be no AIs and an asteroid will kill us all in 200 years, but we’ll figure out how to thaw cryonauts in 100, so they get some bonus years.
I don’t think it’s a matter of an intelligence being strong or weak. I’m relatively confident that the inverse problem of computing the structure of a human brain given a rough history of the activities of the human as input is so woefully underconstrained and nonunique as to be impossible. If you’re familiar with inversion in general, you can look at countless examples where robust Bayesian models fail to yield anything but the grossest approximations even with rich multivariate data to match.
Unless you’re conjecturing FAI powers so advanced that the modern understanding of information theory doesn’t apply, or unless I’m missing the point entirely.
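To make the underconstrained-inversion point concrete, here is a minimal toy sketch (my own illustration, not anything from the thread): a forward model that collapses a multi-dimensional state into fewer observations, so that completely different states produce identical data and no inference procedure, Bayesian or otherwise, can distinguish them from the data alone.

```python
import numpy as np

# Hypothetical toy forward model: 3 unknowns, 1 observation.
# Inverting the observation cannot recover the state, because
# infinitely many distinct states map to the same data point.
A = np.array([[1.0, 1.0, 1.0]])  # forward operator

x1 = np.array([3.0, 0.0, 0.0])   # one candidate state
x2 = np.array([1.0, 1.0, 1.0])   # a completely different state

y1 = A @ x1
y2 = A @ x2

# Both states yield exactly the same observation, so the inverse
# problem is non-unique: the data cannot tell x1 from x2.
assert np.allclose(y1, y2)
print(y1)  # prints [3.]
```

The same structure holds for any forward map that loses information; adding a prior only picks a favored solution out of the infinite set, it does not recover the true one.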
I think those possibilities are unlikely. /shrugs