The fact that it is an existing technology does increase the relative importance of infrastructure and adoption versus research. But for a certain class of scenarios, research into damage-prevention is just as relevant as any other anti-aging tech (perhaps more relevant because it has a higher chance of being a factor in one’s own survival). This happens to be the class of scenarios popular among people who haven’t drunk the singularity Kool-Aid, so to speak.
If you talk to a typical biologist today, someone like Athena Andreadis or PZ Myers, they will most likely tell you stuff like:
Cryopreservation damages too many cells for reanimation to be likely.
Ischemia does enough damage within the first few minutes to be a serious concern.
Uploading to a computer is probably not possible because human minds are not analogous enough to computers.
The brain is so tied in with other body systems that you’ll probably lose your identity if you only save the brain.
We folks with a computer science or engineering background tend to regard these claims with suspicion—maybe that indicates there's something we get that they don't. But these beliefs are widespread among biologists, and probably for a reason. A billionaire without training in any science might be more rational to take these biologists at face value, even if they turn out to be wrong.
I do believe singularity-style events have significant probability. We may well be uploading humans, designing FAI, and/or using molecular nanotech within the next century. But I think it bears emphasis that we might not. And even if we do, significant information loss from suboptimal preservation may still be irreversible.