Wait a second, your objection doesn’t really counter my point, does it? The author of the post wanted to maximize immortality, so saying that the FAI would have better things to do with its time implies that the FAI wasn’t applying the reversal test to keeping current humans alive. It seems the FAI should either kill the living and replace them with something better, or revive the dead; otherwise it’s being inconsistent. (Not necessarily, but still.) Also, if it doesn’t resurrect those in graves or urns, then it’s not going to resurrect cryonauts either, so cryonics is out. And your “rescue sim” argument doesn’t seem strong: rescue sims might not be considered as valuable as running simulations of people who had already died, so the opportunity cost is high. Not being in a rescue sim could just mean the FAI had better things to do, e.g. running simulations of previously-dead people in heaven or whatever. Am I missing something?
Also, if it doesn’t resurrect those in graves or urns, then it’s not going to resurrect cryonauts either, so cryonics is out.
Why? If the FAI is weak enough, it might be unable to resurrect non-cryonauts. Or maybe there will be no AIs at all and an asteroid will kill us all in 200 years, but we’ll figure out how to revive cryonauts in 100, so they get some bonus years.
I don’t think it’s a matter of an intelligence being strong or weak. I’m relatively confident that the inverse problem of computing the structure of a human brain from a rough history of that human’s activities is so woefully underconstrained and non-unique as to be impossible. If you’re familiar with inversion in general, you can find countless examples where robust Bayesian models fail to yield anything but the grossest approximations, even with rich multivariate data to match.
Unless you’re conjecturing FAI powers so advanced that the modern understanding of information theory doesn’t apply, or unless I’m missing the point entirely.
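(A minimal toy sketch of the underdetermination point, not anything anyone in this thread wrote: the dimensions, the random linear “forward model,” and the variable names are all invented for illustration. The idea is just that when a detailed state is observed only through a coarse record, wildly different states produce exactly the same record, so no amount of cleverness recovers the original from the record alone.)

```python
# Toy illustration: many distinct high-dimensional "states" map to the
# same low-dimensional observation, so the observation cannot single
# out the original state. All numbers here are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 10_000   # stand-in for a detailed state (e.g., a brain)
OBS_DIM = 10         # stand-in for a coarse historical record

# A fixed forward model: detailed state -> coarse observation.
A = rng.normal(size=(OBS_DIM, STATE_DIM))

true_state = rng.normal(size=STATE_DIM)
observation = A @ true_state

# Any vector in the null space of A can be added to a candidate state
# without changing the observation at all. Project a random direction
# onto that null space.
direction = rng.normal(size=STATE_DIM)
direction -= A.T @ np.linalg.lstsq(A @ A.T, A @ direction, rcond=None)[0]

alternative_state = true_state + 5.0 * direction

print("observation mismatch:", np.linalg.norm(A @ alternative_state - observation))
print("state mismatch:      ", np.linalg.norm(alternative_state - true_state))
# The first number is ~0 while the second is large: radically different
# states are indistinguishable given only the coarse data.
```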
I think those possibilities are unlikely. /shrugs