Best shot at immortality?
What looks, at the moment, like the most feasible technology that could grant us immortality (e.g., mind uploading, cryonics)?
I posed this question to a fellow transhumanist and he argued that cryonics is the answer, but I failed to grasp his explanation. Besides, I am still struggling to learn the basics of science and transhumanism, so it would be great if you could shed some light on my question.
My guess as to why cryonics is the best method currently available is that it’s the best bet for keeping you potentially alive until better methods are developed.
Speaking just for myself, I’m not convinced that any method can give immortality (Murphy happens), but there’s a good bit of hope for greatly extended lifespans.
What we know about cosmic eschatology makes true immortality seem unlikely, but there’s plenty of time (as it were) to develop new theories, make new discoveries, or find possible new solutions. See:
Cirkovic “Forecast for the Next Eon: Applied Cosmology and the Long-Term Fate of Intelligent Beings”
Adams “Long-term astrophysical processes”
for an excellent overview of the current best estimate of how long a human-complexity mind might hope to survive.
Just about everything Cirkovic writes on the subject is really engaging.
Cryonics is useful for preserving your body until a method for immortality is developed. It is not, on its own, such a method.
If you die now, cryonics is the only method available to give you a chance at immortality.
Cryonics is either not an answer, or the only answer, depending on what exactly you mean by the question.
More importantly, cryonics is useful for preserving information. (Specifically, the information stored by your brain.) Not all of the information that your body contains is critical, so just storing your spinal cord + brain is quite a bit better than nothing. (And cheaper.) Storing your arms, legs, and other extremities may not be necessary.
(This is one place where the practical reasoning around cryonics hits ugh fields...)
Small-tissue cryonics has historically been more advanced than whole-body preservation. This may not be the case anymore, but it certainly was, say, four years ago. So storing your brain alone gave you a better bet at good information retention than storing the whole body. I believe that whole-body methods have improved somewhat in the past few years, but they still have a ways to go. Part of the problem lies in efficiently perfusing cryoprotectants through the body.
If you place credence on the possibility of ems, then you might consider investing in neuro-preservation. In that case, you wouldn’t need revival, only good scanning and emulation tech.
Edit: Also, I highly recommend the Alcor site. The resources there span the gamut from high level to detailed and there’s good coverage of the small tissue and cryoprotectant problems among other topics. http://www.alcor.org/sciencefaq.htm
If you are older you should definitely be focusing on strategies for biological life extension (calorie restriction, or whatever), and everyone should sign up for cryonics as an insurance policy.
Ultimately, with full molecular nanotechnology, whether the engineering of negligible senescence is biological or digital is rather beside the point (“What exactly do you mean by ‘machine’, such that humans are not machines?”—Eliezer Yudkowsky).
However, Unfriendly AI would render the whole point moot. So the most important thing is to guarantee we get Friendly AI right.
That’s true if you have a sufficiently high probability estimate of either FAI or UFAI arriving before your natural death.
Given cryonics, the point of death isn’t crucial for the significance of existential risks for personal survival.
It depends on your age. If you are under 30, it’s probably staying alive until we reach longevity escape velocity. If you are over 60, it’s probably cryonics.
Edited in response to Vladimir_Nesov’s comment.
Downvoted for the insane “almost certainly”.
Good point.
This isn’t really an answer. It’s kind of like telling someone about Moore’s law when they ask what the next big advance in processing will be.
Why not? Both have actionable implications and are falsifiable paths to immortality.
Directly answering the question: organ replacement, including some brain augmentation that eventually shifts into uploading, seems most likely. Cryonics isn’t really a direct answer to the question if you want to talk about trajectories to achieve immortality. It’s too hard to predict where a breakthrough will be made or a wall will be hit. I think the most feasible trajectory is focusing on money and power, then organ replacement and traditional life extension techniques, along with general existential risk reduction.
“Mind uploading” makes debatable assumptions about how the mind works. It might have the result of killing you while leaving behind a Siri-like app which tricks living people’s theory of mind into thinking that you have survived the upload.
Cryonics, by contrast, falls into the realm of testable neuroscience, as Sebastian Seung argues in his new book:
http://www.box.com/shared/static/pakcq9ffu2fral7r5uvd.png
I don’t think the cryonics of today is a very good bet for immortality, and mind uploading may still take a while to develop. So I guess your best bet is to use the currently known methods to improve your life expectancy, and hope for the situation to improve during your lifetime (or even better, work on it!).
I’m not sure what criteria you’re intending with “feasible”, but I’d say FAI, as uploading/cryonics have a lot of failure modes, one of which is uFAI. Unless something weird happens, e.g. a currently-hidden AI keeps us from gobbling the stars, then an FAI once unleashed should be able to revive every human who’s ever died, so even if you die before it’s developed you should still be okay. (If an FAI would want to do that, anyway.) Whereas most people would be skeptical that an AI could be powerful enough to resurrect every human ever, I’m actually more skeptical that we’re not currently at the mercy of an AI or an entire coalition of AIs. Fermi paradox and what not. I’d say that there’s a lot of structural uncertainty, though, and that it would be unwise to put much faith in any hypotheses that involve highly advanced technology/intelligences.
There seems to be rather a lot of information lost beyond the chance of recovery. The mapping of ‘current world as best as the FAI could plausibly deconstruct’ to ‘possible histories that would lead to this state’ is not 1:1.
The best I could expect of an FAI is the ability to construct a probability distribution over all the likely combinations of humans who could have lived, and perhaps ‘resurrect’ rather a lot of people who never lived in the hope that it’d get most of the ones who did live in the process.
Not if there are AIs out there in the universe who can catch the information and run it back to your FAI at lightspeed. And since our FAI can catch information about the causal past of other AIs that they’d otherwise never have been able to get back, it’s even a clear-cut trade scenario. I see no reason not to expect this by default. (Steve’s idea; I think it’s pretty epic, especially if the part works where the AIs collectively catch all of each other’s quantum entanglements, which enables them to coordinate to reverse the past. That’d be freakin’ awesome. And either way, I think hearing this idea was the first time I thought, “wow, if a mere human can think of that, imagine what ideas a freaking superintelligence could come up with”.)
I’m not even confident there are other AIs out there in the universe. At least, not in our Everett-Branch-Future-And-Relevant-History-Light-Cone.
I’m not that confident, especially as I have a sneaking suspicion that something really weird is going on cosmologically speaking, but isn’t it the default assumption ’round these parts?
The default assumption is that there are (or will be) many other FAIs that light from our world-history will directly interact with? I didn’t know that. It’s certainly not mine. I thought it was more likely that we were for practical purposes alone. If you’ll pardon the shorthand reasoning:
Fermi-like considerations… it’s epically unlikely that life emerges all the way to superintelligence takeoff.
Anthropics and self-indication and suchforth… most EBs where one superintelligence emerges will not be branches in which more than one superintelligence emerges.
Even if there are heaps of other superintelligences, with high probability all the other ones are in parts of the (broadly used) Universe that are causally inaccessible.
The physics is complicated, often speculative (by folks like Tegmark) and beyond me but it all adds up to “we’re probably effectively alone in the universe as we see it”.
So you think there are other FAIs out there that our civilization would encounter if we got that far? How much does this depend on probability that we are in a simulation or under the benevolent (or otherwise) control of a powerful agent and how likely would you consider it to be conditional on us not being simulated/overseen?
So it’s possible that spacetime is infinitely dense and if you’re a superintelligence there’s no reason to expand. Dunno how likely that is, though black holes do creep me out. Abiogenesis really doesn’t seem all that impossible, and anyway I think anthropic explanations are fundamentally confused. If your AI never expands then it can’t get precise info about its past, but maybe there are non-physical computational ways to do that, so the costs might not be worth the benefits. It seems like I might’ve been wrong in that LessWrong folk might prefer anthropic solutions to Fermi, but I’m not sure how much evidence that is, especially as anthropics is confusing and possibly confused. So yeah… maybe 25% or so, but that’s only factoring in some structural uncertainty. Meh.
’Course, my primary hypothesis is that we are being overseen, and brains sometimes have trouble reasoning about hypothetical scenarios which aren’t already the default expectation. It’s at times like this when advanced rationality skills would be helpful.
I don’t follow. Do you think intelligences would loudly announce their existence over a long enough time period such that we would know about it? It always struck me as more likely that AGIs were quiet than that they didn’t exist. Remember, all those stars you see at night don’t necessarily exist; could just as easily be an illusion. All it’d take is for one superintelligence to show up somewhere and decide that we weren’t worth killing but that we shouldn’t get to see what’s actually going on as it gobbles all the unoccupied planets. There are various reasons it would want to do this. [ETA: The alternative, that abiogenesis is really difficult, strikes me as unlikely, and I have a very strong skepticism of anthropic “explanations”.]
Hm, this might be a difference of perspective; I’m not very confident in the simulation argument as it’s usually put forth. (I tried to explain some of my reasons elsewhere in this thread.)
(They don’t have to be Friendly, they just have to be willing to trade.) I don’t have a strong opinion either way. If we’re being overseen then it seems true by definition that we’ll run into other AGIs if we build an FAI, so I was focusing on the scenario where we’re not being overseen/simulated/fucked-with. In such a scenario I don’t know what probability to put on it… I’ll think about it more.
This seems likely but is not true by definition. In fact, if I were designing an overseer, I can see reasons why I might prefer to design one that keeps itself hidden except where intervention is required. Such an overseer, upon detecting that the overseen have created an AI with an acceptable goal system, may actively destroy all evidence of its existence.
True, mea culpa. I swear, there’s something about the words “by definition” that makes you misuse them even if you’re already aware of how often they’re misused. I almost never say “by definition” and yet it still screwed me over.
I keep running into people who think anthropic reasoning doesn’t explain anything, or who have it entirely backwards. One prominent physicist whose name eludes me commented in an editorial published in Physics Today that anthropic reasoning was worthless unless the life-compatible section of the probability distribution of universal laws was especially likely. This so utterly misses the point that he clearly didn’t understand the basic argument.
I’ve never encountered anyone who’s willing to admit to buying anything stronger than the weak anthropic principle, which seems utterly obviously true:
1) If the universe didn’t enable the formation of sapient life, it wouldn’t exist (edited to clarify: sapient life, not the universe). If the universe made the formation of such life fantastically unlikely in any one location but the extent of the universe is larger than the reciprocal of that probability density, it would likely exist.
2) Our existence thus doesn’t indicate much about how hospitable the rules of the universe are, in general, to the formation of sapient life, because the universe is awfully large, possibly infinite.
3) In the event that the rules of the universe that we observe are consequences of more fundamental laws, and those fundamental laws are quantum mechanical in nature so that multiple variants get a nonzero component, then the probability of life forming in this universe is taken as the OR among all of those variants.
That’s really all there is to it...
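The quantitative core of points 1 and 2 can be sketched in a few lines (a minimal illustration; the per-site probability and site count are made-up numbers, not estimates):

```python
import math

def p_life_somewhere(p_per_site: float, n_sites: float) -> float:
    """Probability that life arises in at least one of n_sites
    independent locations, each with tiny per-site probability
    p_per_site. Uses 1 - (1 - p)^N ≈ 1 - exp(-p * N) for small p."""
    return 1.0 - math.exp(-p_per_site * n_sites)

# A fantastically unlikely per-site probability is overcome once the
# number of sites approaches or exceeds its reciprocal:
print(p_life_somewhere(1e-30, 1e30))  # ≈ 0.632
print(p_life_somewhere(1e-30, 1e33))  # ≈ 1.0
```

This is just the "extent larger than the reciprocal of the probability density" clause from point 1 made explicit: once p·N is of order 1 or more, observing that life exists somewhere tells you almost nothing about how hospitable any one location's rules are.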
Is there a typo in there? The simulation argument doesn’t seem to fit.
Oh, I was assuming… never mind, it’s probably not worth untangling.
Not as far as I can tell. What do you mean?
Sorry, my sentence was unclear; “it” was referencing the belief that at least one intelligence besides us has already shown up or will show up somewhere in the universe at some point. It seems to me that most people, including most people on LessWrong, think this is likely.
Is there any reason to think that such detailed information as would be needed to recreate people wouldn’t get lost in noise?
An expanding superintelligence sphere acts as a lightyears-wide optical lens, providing extremely redundant observations of far-off objects. This can be combined with superintelligent error-correction and image reconstruction. If you have multiple such superintelligences then you get even more angles. But yeah, I haven’t done the actual calculations; it’d be super cool if someone else did them.
On another note, about six months ago I spent a few days looking at the quantum information theory literature trying to figure out if AIs could coordinate to reverse the past; I think I have enough knowledge to pose it as a coherent question to someone with a lot of knowledge of reversible computing and QIT. I’d like to do that someday.
But the “error correction and image reconstruction” itself is not lossless. There is inevitable distortion caused by scattering off the unknown distribution of interstellar dust particles and from gravitational lensing between the AI and its target. Not to mention all the truly random crap happening in the interstellar void as the photons interact with the quantum foam. The inversion methods you suggest do not yield a true image, merely a consistent one.
It could still be enough to resurrect a person, if the difference from truth is on the order of the difference between me right now and me after sleeping for a few years. (Hint: the two me’s are very different, but they’re still recognizable as me.)
I’m not a total expert, but try me.
I’ll have to spend a few hours reloading the concepts into my brain. When I do that I’ll post it to Discussion.
Does anyone have an estimate of how many actually different humans there can be (i.e., the size of brain-space, measured in units such that someone about one unit away from me would seem like the same person to someone who knows me)?
It might be possible to simply create all humans that could have existed; those who actually did would be a subset, we just couldn’t tell which ones.
What is the recommended literature related to the ideas both you and wedrifid have been discussing in this thread? I googled but I figure it wouldn’t hurt to ask either. Thanks.
Which aspects? I think the only field relevant to what we were discussing is information theory. The stuff about superintelligences coordinating doesn’t have any existent literature, but similar ideas are discussed on LessWrong in the context of decision theory.
Your comment seems far-fetched. For one, an AI with such awesome powers could also choose to run a copy of you starting from any moment when you feel unhappy, not just the moment of your death. Since the universe around me stubbornly keeps on looking normal, something will probably stop “rescue sims” from happening.
I’m trying to avoid assuming a metaphysic where simulations are assumed to be possible, because I’m not sure such metaphysics ultimately make sense. (Maybe you can guess my rationale: I think “measure” and “existence” and so on are very fuzzy, and I think if we reason in terms of decision-theoretic significance then it might turn out that running a simulation of something doesn’t double its “measure”, and what matters is what already “actually existed/exists”, i.e. what’s already “actually significant”.) If you don’t assume a simulationist metaphysic then “rescue sims” are dubious, whereas reviving people who are known to have already existed seems more like a straightforward application of technology. If you take a sort of common-sense layman’s perspective, reviving the dead sounds a lot less speculative than running an exact simulation of a mind on a computer in a way that will actually change the past. …No?
The layman’s perspective sounds reasonable enough, but seems to fall apart on closer inspection. What makes a human brain different from a simulation? Why would the AI have an easier time reconstructing the mind of someone who died on March 20 than reconstructing a copy of you on March 21? Why are future simulations of you necessarily less “significant” than current you? This looks suspiciously like a theory constructed specifically to be testable only by death, i.e. not testable to the rest of us.
(The following probably won’t be understandable / won’t appear motivated. Sorry.)
You can make a copy, but as soon as you simulate it diverging from the original then you’re imagining someone that never existed in a timeline that didn’t actually happen. Otherwise you’re just fooling yourself about what actually happened, you’re not causing something else to happen. Whereas if you revive a mind that died and have it have new experiences then you’re not deluding yourself about what actually happened, you’re just continuing the story.
Because the simulator would just be deluding themselves about what actually happened, like minting counterfeit currency; the important aspect of me is that I’m here embedded in this particular decision policy computation with these and such constraints. Take me out of my contexts and you don’t have me anymore. If you make a thousand crackfics involving Romeo running away to a Chinese brothel then nobody’s going to listen to your stories unless they have tremendous artistic merit. And if a thousand Romeo & Juliet crackfics are shouted out in the middle of a forest but nobody hears them, do they have any decision theoretic significance?
But I haven’t actually worked out the math, so it’s possible things don’t work like I think they do.
Well, it’s a theory about anthropics… quantum immortality is also a theory that is only testable by death, but I don’t think that’s suspicious as such. (In fact I don’t actually think quantum immortality is only testable by death, which you might be able to ascertain from what I wrote above, but um, I strongly suspect that I’m not understandable. Anyway, death is the simplest example.)
You might be on to something, but I can’t understand it properly until I figure out what “decision-theoretic significance” really means, and why it seems to play so nicely with both classical and quantum coinflips. Until then, “measure” seems to be a more promising explanation, though it has lots of difficulties too.
I don’t think this argument makes causal sense. If you’d been uplifted into a rescue sim, you certainly wouldn’t have made this post. The universe looks, from all of our perspectives, exactly like it would if rescue sims were possible, since that possibility doesn’t currently make any testable predictions. A future version of you might see some differing effects, but that version of you isn’t around right now, and can’t provide evidence on the subject.
You’re right that my words don’t provide new evidence to you, but if you anticipate becoming a rescue sim at some point and that doesn’t happen, that’s evidence against rescue sims for you.
Even internally, no, that still doesn’t work. The evidence that your current continuity has observed is not influenced by whether or not rescue sims exist. That’s the same thing as saying that you have seen no evidence one way or the other. Even if multiple other versions of you are instantiated in the future, what the continuity of yourself that is typing this observes doesn’t change.
I don’t think that’s the right way to do Bayesian updating in the presence of observer-splitting.
Imagine I sell you a device that I claim to be a fair quantum coin. The first run of the device gives you 1000 heads in a row. You try again, and get another 1000 heads. You come back to my store to demand a refund, and I reply that my fair coin gives rise to many branches including this one, so you have nothing to complain about. Do you buy my explanation, or insist that the coin is defective?
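The refund intuition here can be made concrete with a toy Bayesian update (a sketch only; the prior odds are invented for illustration, and "defective" is modeled as "always lands heads"):

```python
from fractions import Fraction

# Two hypotheses about the device:
#   fair:      P(heads) = 1/2 on each flip
#   defective: always heads
# Prior heavily favoring the shopkeeper's claim (made-up numbers):
prior_fair = Fraction(999, 1000)
prior_defective = Fraction(1, 1000)

# Likelihood of observing 2000 heads in a row (two runs of 1000):
like_fair = Fraction(1, 2) ** 2000
like_defective = Fraction(1, 1)

# Bayes' rule, computed exactly with rationals:
posterior_defective = (prior_defective * like_defective) / (
    prior_fair * like_fair + prior_defective * like_defective
)
print(float(posterior_defective))  # ≈ 1.0: insist on the refund
```

However the branching is described, the evidence overwhelms any reasonable prior: the likelihood ratio of 2^2000 swamps the 999:1 odds against "defective", which is the point of the example.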
I started to write a rebuttal, but it’s quickly becoming clear to me that I don’t have a systematic way of reasoning about this topic. I don’t necessarily agree with you, but I need to give the matter a lot more thought. Thank you for giving me something to think about.
My concern is basically that I’m profoundly uncomfortable with the idea of evidence flowing backwards in time. I mean, you’re updating your beliefs about the future based on what you haven’t seen happen in the future.
Wait a second, your objection doesn’t really strongly counter my point, right? ’Cuz the author of the post wanted to maximize immortality, so saying that the FAI would have better things to do with its time would imply that the FAI wasn’t applying the reversal test when it comes to keeping current humans alive. It seems that the FAI should either kill those living and replace them with something better, or revive the dead, otherwise it’s being inconsistent. (I mean not necessarily, but still.) Also, if it doesn’t resurrect those in graves or urns then it’s not gonna resurrect cryonauts either, so cryonics is out. And your “rescue sim” argument doesn’t seem strong; rescue sims might not be considered as good as running simulations of people who had died; high opportunity cost. So not being in a rescue sim could just mean that the FAI had better things to do, e.g. running simulations of previously-dead people in heaven or whatever. Am I missing something?
Why? If FAI is weak enough, it might be unable to resurrect non-cryonauts. Also maybe there will be no AIs and an asteroid will kill us all in 200 years, but we’ll figure out how to thaw cryonauts in 100, so they get some bonus years.
I don’t think it’s a matter of an intelligence being strong or weak. I’m relatively confident that the inverse problem of computing the structure of a human brain given a rough history of the activities of the human as input is so woefully underconstrained and nonunique as to be impossible. If you’re familiar with inversion in general, you can look at countless examples where robust Bayesian models fail to yield anything but the grossest approximations even with rich multivariate data to match.
Unless you’re conjecturing FAI powers so advanced that the modern understanding of information theory doesn’t apply, or unless I’m missing the point entirely.
I think those possibilities are unlikely. /shrugs
How do you define immortality?
No one’s mentioned spiritual options?
Pretty much every religion promises something after this life. Of course, these religions are probably false, but how probably false? I think there must be at least a 2% chance that one of them is true.
If you actually believe that, then it would seem the smart choice is to dedicate all your attention to determining which religion has the greatest chance of being true (modified by how much you weight the consequences each religion promises).
Trying to think about how a religion could be “true” is hurting my brain. (Is it true that the sun goes ’round the Earth? Yes and no...)
ETA: Holy crap, that’s the fastest downvote I’ve ever gotten. Couldn’t have been more than three seconds.
Well, if some chick actually got knocked up without getting laid and her kid raised some folks from the dead, transmogrified booze at parties then resurrected himself from death after getting executed then it’d start looking like one religion might be true.
You know, I always wondered about the semantics of “virgin Mary”. ’Cuz according to Jews rape doesn’t count, right? And a bunch of Roman soldiers were there around that time. And metaphysically speaking there’s nothing stopping the “Holy Spirit” taking the form of a Roman soldier...
Water into wine isn’t hard if you have a potent cannabis tincture. The resurrection, though… that seems hard to do without real magick.
Are you thinking of the old Jewish slander about a centurion knocking up Mary?
Not my downvote, but did you mean to assert geocentrism (sun round earth) rather than heliocentrism (earth round sun)?
Next thing you’re going to claim that centrifugal force and the Coriolis effect don’t exist.
No, Mr. Bond. I expect you to die.
Yes.
I suspect you would get a lot fewer downvotes if the options were [thumbs up], [thumbs down], and [???].
I did in my comment, just using different language. Replace “AI” or “simulator” or whatever with “God” or “god” or whatever and you get typical spiritual afterlife/resurrection claims. (Additionally, “spirit”/”spiritual” can be translated as “computation”/”computational” in some contexts.)
Does anyone know of any public fora that talk about cool things and have high enough intelligence and general epistemic standards that comments like the above wouldn’t be downvoted?
I am very confused about why your comments are being downvoted, though I suspect Miley’s are being downvoted because of reasons like ‘we don’t want to privilege your silly hypothesis by even addressing it when there are 80 posts on LW that crush religious hypotheses into dust’, + ‘well-kept gardens die by pacifism’.
Though I strongly suspect that if MileyCyrus had properly signaled distaste for religious ideas and proposed prayer as a possible avenue to immortality without making an apparently high estimate of its probability, s/he wouldn’t have been downvoted.
I dunno, it’d still be a bad idea. There’s no chain of demonstrable causality leading from prayer to longevity.
Well, that would have been lying, so it can’t be very proper.
You think my 2% estimate was high? Richard Dawkins assigned theism approximately the same probability. I can understand if you think my confidence in atheism is low, but is so ludicrously low that it deserves 9 downvotes?
What probability would you assign theism?
This is disingenuous to the point of being dishonest. Reference:
..
Well, if we’re going to start dropping names, Eliezer would “be substantially more worried about a lottery device with a 1 in 1,000,000,000 chance of destroying the world, than a device which destroyed the world if the Judeo-Christian God existed.” It’s not the same hypothesis, but it’s close, and it’s stupid to use ethos so much anyway.
No, it’s not so low that it deserves 9 downvotes. The fact that it has received so many is disturbing.
I think I’ll side with the perspective advanced by Eliezer here:
“6.9 out of 7” is such a weird probability that I wonder if Dawkins just made it up on the fly or something.
It comes from his 7 point scale for measuring belief along the theist/atheist spectrum.
That makes sense. It still seems to be more of a rhetorical tool to illustrate that there is a spectrum of subjective belief. People tend to lump important distinctions like these together: “all atheists think they know for certain there isn’t a god” or “all theists are foaming at the mouth and have absolute conviction”, so for a popular book it’s probably a good idea to come up with a scale like this, to encourage people to refine their categorization process. I kind of doubt that he meant it to be used as a tool for inferring Bayesian confidence (in particular, I doubt 6.9 out of 7 is meant to be fungible with P(god exists) = .01428).
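For what it’s worth, the arithmetic being doubted here is just a linear reading of the scale (a sketch of that assumption, not an endorsement; the linearity itself is exactly the questionable step):

```python
def scale_to_probability(score: float) -> float:
    """Read 'score out of 7' on Dawkins' belief scale as a linear
    fraction of certainty in atheism, so P(theism) = 1 - score / 7.
    This is the reading that yields the figure quoted upthread."""
    return 1.0 - score / 7.0

print(round(scale_to_probability(6.9), 5))  # 0.01429
```

Under any non-linear reading (say, mapping the upper end of the scale to log-odds), 6.9 out of 7 could correspond to a far smaller probability, which is why treating the scale as fungible with a Bayesian credence is dubious.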
Given that he’s pretty disposed to throwing out rhetorical statements, I’d say that’s a reasonable hypothesis. I’d be surprised if there was more behind it than simply recognizing that his subjective belief in any religion was ‘very, very low’, and just picking a number that seemed to fit.
Alternatively...
I asked my girlfriend if I should do one more, but she voted against. Something about “being obnoxious”.
Good call. I (guessed that) mine should be ok in as much as it serves as acknowledgement. Conversation, not monologue.
I was neutral with respect to the original objection, in as much as I thought you had a point regarding biased reception of the then-parent, but I wasn’t comfortable with the snide framing.