Brain Preservation
Most people, given the option to halt aging and continue in good health for centuries, would. Anti-aging research is popular, but medicine is only minimally increasing lifespan for healthy adults. You, I, and everyone we know have bodies that are incredibly unlikely to make it past 120. They’re just not built to last.
But what are you, really? Your personality, your memories, they don’t leave you when you lose a leg. Lose most parts of your body and you’re still you. Lose your brain and that’s it. [1] You are a pattern, instantiated in the neurons of your brain. That pattern is sustained by your body, growing and changing as you learn and experience the world. Your body supports you for years, but it deteriorates and eventually isn’t up to the task any more. Is that ‘game over’?
Perhaps we could scan people’s brains at extremely high detail so we could run them in some sort of human emulator. This requires a thorough understanding of the brain, huge amounts of storage, unbelievably fast computers, and very detailed scanning. If it’s even possible, it may be several hundred years away.
Our bodies aren’t going to last that long, but what if we could figure out how to preserve our brains so that the information didn’t decay? Then, if the future turned out to be one in which we had advanced brain emulation and scanning technology, we could be revived. I don’t know if people in the future would want to spend the time or money to revive us, but in a future with technology this advanced, reviving a preserved brain as a computer simulation could be really cheap.
The most advanced technology for long-term tissue preservation today [2] is cryonics: freezing with vitrification. You add something to the blood that keeps ice crystals from forming and then freeze it. This is loosely analogous to how wood frogs survive winters frozen, though the frogs actually tolerate some ice formation rather than preventing it. The biggest organs that have been successfully brought back to working order after vitrification are rabbit kidneys, and the brain is a lot bigger and much more complex. While there are people applying this technique to human brains after death, it’s very much a one-way street; we can’t revive them with current technology.
How much should it worry us that we can’t reverse this freezing process? If we’re already talking about revival via high-detail scanning and emulation, which is only practical after hundreds of years of technological development, does it matter that we can’t currently reverse it? The real question in determining whether vitrification is sufficient is whether we’re preserving all the information in your brain. If something critical is missing, or if something about our current freezing process loses information, the brains we think are properly preserved might be damaged or deteriorated beyond repair. Without a round trip test where we freeze and then revive a brain, we don’t know whether what we’re doing will work.
Another issue is that once you’ve frozen the brain you need to keep it cold for at least a few centuries. Liquid nitrogen is pretty cheap, but providing it constantly over such a long time is hard. Organizations fall apart: very few stay in business for even 100 years, and those that do have often departed from their original missions. Current cryonics organizations seem no different from others, with financial difficulties and imperfect management, so I don’t think 200+ years of full functioning is very likely.
Even if nothing goes wrong with the organization itself, will our society last that long? Nuclear war, ‘ordinary’ war, bioterrorism, global warming, plagues, and future technologies all pose major risks. Even if these don’t kill everyone, they might disrupt the cryonics organizations or stop technological development such that revival technology is never developed.
Taking all these potential problems and risks into account, it’s unlikely that you can get around death by signing up for cryonics. In attempts to calculate overall odds of success from the estimated chances of each step I’ve seen various numbers: 1:3, 1:4, 1:7, 1:15, and 1:400. I’m even more pessimistic: I calculated 1:600 when I first posted this to LessWrong and have since revised down to 1:1000. To some people the probability doesn’t matter, but because it’s expensive and there are plenty of other things one can do with money, I don’t think it’s obviously the sensible thing to do.
(I also posted this on my blog.)
[1] Well, lose your heart and you’re gone too. Except that we can make mechanical hearts and you stay the same person on receiving one. Not so much with a mechanical brain.
[2] Plastination is also an option, but it’s not yet to a point where we can do it on even a mouse brain.
In spare hours every month or three, I’ve been working on a questionnaire and calculation system to elicit background beliefs about cryonics so as to be able to capture people’s opinions about the value of cryonics (even if they are “I would pay money to avoid it because it sounds really cold”). The idea is to be able to give this instrument to lots of people and let a sort of “wisdom of the crowds” process work on each separate element of the value calculation, and then be able to play some of the insights back for specific people to bring them to some kind of sane conclusion on the subject, fixing their “outlier beliefs” on particular questions by showing them information many other people thought led to different conclusions. I totally do not have this in final form, but it has helped me form more than normally educated opinions on the subject.
From what I can tell, the place I part company with people who have already signed up without substantial calculation is in the estimate of the long-term value of post-resuscitation life. (Well, that and I don’t like the investment options normally used to pay for it, given other possible uses for my time and money. If life insurance options were held constant and 10-year 20%-real-return TIPS were for sale, it would probably be a slam-dunk yes to get a 10-year term cryo-policy and pour money into the TIPS.)
A lot of people seem to assign basically infinite value to their own post-resuscitation life. One basis for this is that they simply take continued existence as the most fundamental value there is, a sort of “there is no amount of money a person wouldn’t pay for a medical treatment that would give them some chance versus no chance” attitude. I don’t share this attitude. There are things I’d die for (thermodynamic obliteration death, not just heart/brain cessation), and some of these things could be accomplished by relatively prosaic amounts of money like $500 million, hence it seems clear to me that my life is probably worth less than this. To me, people who have an infinitely self-valuing attitude seem like immortalist versions of utility monsters, and so it doesn’t seem like a compelling pitch on a virtue ethical basis… though I imagine that someone could produce some sort of “poetry” to change my aesthetic responses somehow?
The other way people get an almost infinite value is to believe that intentional or negligent death would never happen post-resuscitation (no suicide, no murder, no accidents) and thus they expect to happily live for something like a trillion years, and any fraction of a chance at this is “worth it”. This argument seems somewhat plausible to me, and it is my basis for continuing to think that the value of information on cryonics is kind of high, especially if I can develop an inter-subjectively compelling instrument for spelling out the value calculation for many people that is consistent with this reason. It seems like a factual question and if the conclusion comes out a certain way, and people can be convinced of it by the iterated application of basically prosaic reasoning that as yet no one on the planet (including me) possesses, then the acquisition and deployment of this set of reasons would be worth trillions of years times billions of lives and it is one of the most important things that anyone currently alive could do, unless they can do something that entails this as a trivial consequence.
But my own gut feeling at this point is that there probably exists a small but non-negligible “probability that I’d commit suicide in year Y, for arbitrary dates” that would apply to me post-resuscitation unless my mind was forcibly modified by external processes. The ethical framework I’m partial to suggests that, in this case, my “mental self” would have become a mere puppet in the service of the manipulating entity (which was presumably racking up trivial-hedon-points (or some such)?)… but I’d actually prefer to die with some measure of personal integrity rather than end up in that kind of boring and moderately degrading infinite loop. I want something like “a self-directed life with narrative integrity”, not just repetition of some semi-mechanical notionally-point-worthy action. But if I don’t accept a self-modification to make suicide impossible, then the expectation curve for “me being alive N years after resuscitation because I still haven’t exercised my right to kill myself” probably becomes pretty small within a century or five (especially as my mind gets out into the zone past 130 where no human psychology has ever existed to permit interpolative base rate justification).
So at this point, the huge value of information argument still kind of applies and it is the thing that motivates me when I work on my questionnaire, but the thing I’m conceptually interested in is something like “speculative positive geronto-psychology”, which is data-poor and hard to make progress in. What would a cognitively and spiritually happy 220-year-old person be like? What would their intellectual interests be? Would they be impatient to reach 300 because the people that old seem world-wise and spiritually developed to them? Would it be creepy cradle-robbing for them to be in a sexual relationship with someone who was merely 100 years old because the 100-year-old was so naive and vulnerable? Would they be utterly incapable of learning new technology or new skills because the 25-year opportunity cost hit if they tried to change specialties was too much to handle?
It is now a stereotype for people to have a “mid life crisis” somewhere between 38 and 55. Is there something else with similarly noticeable internal structure (that would acquire a name once it started happening and people started noticing it) that lurks in human souls waiting to trigger somewhere between 138 and 182, like a “sesquicentennial crisis” or something? If there is such a thing, how many people decide to kill themselves over it? Maybe a set of inferences click into place at some point and everyone you might naively want to stay alive realizes: “Oh yeah. Nihilism. Duh… That was ironic” and they pull the plug. Presuming a mathematical model of psychological processes that justifies very long term psychological optimism seems naive. It could be I just haven’t found the right approach yet, but it seems like post resuscitation life expectancies of 300 years are vastly more plausible than post resuscitation life expectancies of 300 thousand years (and 300k really would justify a cryo-policy that had a 0.1% chance of working… it just seems unlikely to me that this is how things would play out).
So anyway, I don’t have any particular answer here. I’m still working on it. But my questions are no longer the ones I normally hear other people being interested in, so I thought I’d gloss the issues here where they might get eyeballs, to see if this causes other people to update or they have some kind of interesting feedback. Maybe my peculiar brand of virtue ethics is naive? Maybe I “should want” to become a subjective utility monster? Maybe being a hedon-puppet isn’t as ugly as it sounds to me? Maybe I don’t really have a solid grasp of how to value a single object sampled from an exponential decay function?
Maybe I should just re-read Permutation City? :-P
I’m still uncertain about a lot of this. The value of information around the general issue still seems pretty high to me. But at the same time it seems to me that the people who have already signed up did so with inadequate philosophically-grounded financial justification. There are still many terrible reasons to not sign up that seem to be blocking most normal people, I’m not denying that… but I have never heard a clean and thorough positive case for cryonics on its own terms without having to stoop to address the many many quibbles raised by people (1) with major cognitive dissonance on the general subject of death and/or who are (2) in the grip of an ugh field around end-of-life planning. Just because many of those quibbles are wrong doesn’t mean the conclusion they argue against is right.
Share a draft? I’d be happy to help. My cryonics probabilities spreadsheet is an attempt in a similar direction.
Interesting and nearly impossible subject
The odds a race of people, living on a planet over populated as it is, will take the time effort and money to re-animate thousands/millions of human beings (or brains) simply because those from the past “paid” for them to do so? Has a probability rate of nearly ZERO!
The idea that people actually pay to possibly (and unlikely) be re-animated in any form? Makes one believe that they are not only without understanding of how the life and death process works but also frighteningly unstable.
Once one is dead? They are DEAD. There are legitimate reasons for this that involve such things as the malfunction of organs etc.
Don’t believe congress in the future (if there even is one) will be arguing over proposed bills to help fund the medical procedures necessary to successfully re-animate human beings from the past. Wont happen.
Also the idea of bringing people/brains back to life without having to perform medical procedures in order to keep said subject alive? I personally do not believe would be possible for many, many hundreds of years if at all !
Meaning: you died for a reason and if they don’t fix it? You aren’t going to be resurrected successfully. There will be no star trek beaming device that resurrects the dead and if so expect hundreds if not thousands of years to pass before such a device is invented and tested to be successful.
All utter hogwash in my opinion.
For the sake of argument if said device was invented: What will the failure rate be by the way? Ever think about that?
Where will one live once re-animated? Where will one work? Where will ones brain be placed?
Will homeless shelters be erected for the re-animated? Or will the few “lucky ones” simply be used as lab rats once they are brought back to “life”? hhhmmm
Also, who’s going to re-animate all of these people/brains and what are they going to be paid by the hour to do so? Do they receive benefits? Will they be offered insurance and 2 weeks vacation every year? LOL.
“Honey I’m home! my gosh what a tough day at the re-animation lab today!” How will those brought back to life be fed/nourished? Will they be kept in institutions? Or simply left to walk the streets jobless, hopeless vagrants?
Will there be a re-animators union run by the teamsters?
This is insanity people!
By the way your $100,000 dollars today will be worth about $1,000 tomorrow. In other words: You probably couldn’t spend enough money today to be successfully re-animated in the future. After all inflation IS an issue is it not?
This scenario would be similar to Ben Franklin paying $500 to be cryogenically frozen in 1782 so he could be unfrozen in 2322.
Do you honestly believe his $500 (useless old world dollars) would be enough to pay for his re-animation process in the future? The answer is a resounding NO. Our paper money will be worth nothing in the future as has been the case with most outdated over circulated currency throughout history.
Wake up people, honestly.
The only human beings they may ever consider re-animating in the future would most likely be those who made a phenomenal impact on society and on the world in general. (i.e. world renowned physicists, astronomers, Nobel laureates and possibly leaders of nations depending upon their resumes).
Even this seems highly unlikely due to the cost, man hours etc. involved and even they would most likely be reduced to lab rats or specimens to be used by future scientists. Possibly a life spent in a plexi-glass enclosure for future human beings to gaze at in awe from 9-5pm?
If you honestly believe they (whoever they may be) ,in the distant future, would take the time effort and money to re-animate your average joe? Excuse me while I shoot coffee through my nose in uncontrollable fits of laughter !
what evolved society does such a thing unless it possesses value of some sort? How are you or your brain going to make it in a future of smarter, stronger and more advanced human beings? Will you be reduced to the local freak show at the future carnival? “Come see the ignorant weakling from the past folks: Resurrected Man!”
Someone like ohhh lets say: John Jones dollar store chain and BBQ restaurant magnate of Kentucky OR Jill Holloway trust fund baby of the Wal Mart clan? Probably wouldn’t make the cut unless of course they needed another guinea pig from the past to study. So those of you with questionable resumes don’t count on being resurrected. We don’t invest in bums today why would we in the future?!
If you pay for such a service? You are a naive and emotionally disconnected human being of the highest order plain and simple. 2+2=4 people. This is not rocket science.
Obviously some human beings self importance, fear of death and narcissism knows no bounds (OR any form of sanity).
If you buy into this obvious scam? I have a bridge in Brooklyn I’d like to sell you as well …
I daresay people do think about that. But look at it this way: What’s the failure rate for revivification after cremation? What’s the failure rate for revivification after burial? I personally believe that these techniques have a potentially non-zero revivification rate (we don’t know for certain that we can’t work backwards from aggregate environmental data), but even so, freezing the brain whole is going to give us success probabilities which are orders of magnitude higher.
Speculating on future economics is less fruitful than speculating on future motivation. Here’s an emotionally moving argument, whether or not it’s accurate:
The economic and motivational behaviors of a society are flexible. Societies have existed which are motivated by scientific curiosity or concern for the well-being of their fellow man. Such societies are more likely to resurrect a cryonaut than a society which is motivated purely by selfish profit. This is a good thing, as it means that I am more likely to resurrect into a society which shares my motivational values than one which holds motivational values I find contemptible.
This is, again, an emotional rather than a factual argument—you are attempting to force people’s imagination to conjure familiar images of present-day economic working conditions. Statistically, the future is very unlikely to look like the present—note that the present, at any given point in history, looked very unlike its own past.
Again with the exclamation points. Also, mentioning union politics automatically hooks into people’s political pack instincts, which degrades rational reasoning abilities. People here know this, so they tend to discount arguments which attempt to exploit such hooks.
Are you certain you understand how compound interest and investment work? This is directly from Wikipedia:
Also, are you certain that you know how future economies will operate? “Money” may not necessarily continue to be a prime motivator.
Continuing to show contempt for your audience is an excellent dominance display, but very poor at engaging rational thinking skills. Again—this site is about rational thinking skills. You will not do well to play primate-level affect-manipulation games here. If you want to perform affect-manipulation games here, you need to appeal to higher order emotional responses (pride in the reader’s intelligence, for example, or wonder and excitement at future possibility).
This is actually a potentially valid argument, which (along with your “what will the failure rate be?” point) merits serious consideration. Historically, technologies are not fairly distributed. People should be having better discussions about the fairness of transhuman/post-singularity technologies. Your voice could lend valid input to that process, if you could learn to speak more clearly about it.
That seems patently absurd. There are MULTIPLE famous people from the past that Hollywood has spent millions of dollars creating the illusion of “resurrecting”. If it were possible to legitimately resurrect famous people, I daresay some reality / news commentary channel would be willing to spend millions of dollars per head just to resurrect them, for no other reason than to make a guest panel for some commentary show.
This argument actually has a kernel of truth and importance to it, which you utterly buried in contempt for your audience. Why are you deliberately weakening your own argument, when you have important things to say? Never mind your failure to respect your audience; you are failing to respect yourself. Why are you doing this? What you are saying is too important to cover in shit like this.
Why do you believe that attacking an argument necessitates denigrating the arguer? Facts should stand on their own merits; why are you feeling the need to repeatedly insult the people you are communicating with?
I’m afraid I don’t follow this at all; each sentence seems to have no connection to the previous, other than to convey a vague emotional sense of superiority, hostility and disdain. In what way do you believe that will help you “win” this argument?
I would tend to agree.
You appear to be confused; how can I help?
Hello! I’m going to expend karma to reply to you, twice (my response is too long to fit into one comment). I consider this worth it, on the off chance that I can impart knowledge to you that you will find useful. Please accept this as a genuine gesture of goodwill.
Leading in with this statement is an emotional tactic. People on this site are far better trained to recognize emotional tactics than they are to respond to them. Rather than flagging “I have high status and a willingness to inflict coercive force”, your verbal hostility and boldness merely flag “I am attempting to subvert a rational conversation by playing to my audience’s dominance instincts”. This is a poor tactic to choose here, because people here are explicitly training themselves to not respond favorably to such tactics. Paradoxically, many people here actually respond negatively to such tactics, rather than ignoring them, because using those tactics indicates that you are not part of this community (i.e., not “pack”).
Nothing has a probability rate of zero. If you wish to use this community’s language (and it’s always a good tactic to use the language of your audience), you should rather say “has a probability rate of epsilon”. Also, capitalizing ‘ZERO’ and putting an exclamation point on the end connotes that you are speaking with a level of fervor and passion that most people here do not respond well to, when used to punctuate a numerical fact. Most people appear to respond better in this community, in my observations thus far, if you state your facts with as flat an implied affect as possible. Reserve your passion for your surprising conclusions; most people here respond very favorably to labile displays when expressing surprise—it seems to imply that you are excited at the opportunity to learn something new.
As a follow-up, if you actually state that something has a probability rate of epsilon, be prepared to have people challenge those numbers. This is a VERY math-based community.
Calling your audience stupid only works if you are pulling on their dominance strings. Most people here respond rather poorly to having their dominance triggers manipulated this crassly. Also, you said “unlikely” in parentheses, immediately after declaring in bold terms that the probability is ZERO! in the previous statement. This introduces stress fractures into your argument, as “unlikely” has a very different emotional connotation than ZERO! - it makes it easy for someone to respond with “which is it? Unlikely, or ZERO! probability? These are very different things.”
Regrettably, the definition of “death” is not so cut-and-dry. Anyone who has researched medical ethics is familiar with the basic arguments, but ultimately, it comes down to the fact that brain death happens separately from organ death, happens separately in different portions of the brain than in other portions of the brain, and that revival is possible—although statistically less and less likely—at almost any point in the process, even with current technology. We can assume that future technology will be better than current technology, of course, barring some kind of information-destroying disaster.
In the time-scales we’re talking about, “congress” may not even be a valid target for speculation anymore. Even if they are, politics in this country have vastly changed over the past 40 years, 80 years, 120 years, 160 years, etc.; there is no reason to doubt that they will continue to do so.
(continued...)
The pattern instantiated in the set of neurons that is currently typing this did not exist ten years ago, at least not if the pattern is specified to a certain level of precision (1). So if I am that pattern, I did not exist ten years ago. Nor will I exist in ten years. Nor do I wish to exist in ten years, except in the sense that I would not mind continuing to exist indefinitely as a backup, though I would very much prefer not to experience that kind of existence.
To my way of thinking, I am a nonrecurring pattern of interactions instantiated in a constantly changing system, some of which is in my brain, some of which is in other people’s brains, and some of which is in the physical world around me, including but not limited to my body. “Preserving me” is consequently not a well-defined operation; it’s very hard to say what it means to “preserve” a nonrecurring pattern.
That said, some things definitely seem to qualify as “preserving me” better than others. Preserving the information content of my brain and using it to reconstruct a dynamic entity which has the same memories and habits of thoughts that my brain does now, and embedding that entity in an environment with which it can interact in ways that it experiences as importantly similar to the ways in which my brain currently experiences its interactions with its environment, is a pretty good approximation of “preserving me”.
To my mind, if I value “preserving me” (which I often do), I do best to evaluate various things that might hypothetically exist in the future in terms of how closely they approximate “me”, then do the math. I may well prefer a million things that are .8me to one thing that is .99me. (Or I may not; I don’t mean to suggest that my preference for an approximation of me existing is linear with how well it approximates me, or with how many of it there are. Indeed, I doubt that’s true.)
======
(1) Conversely, if I relax that level of precision sufficiently, that pattern (and, therefore, I) existed ten years ago… but it’s not clear to me why I should stop there. If I relax it a little further, I exist in hundreds of other bodies right this minute. Relax it a smidge further and there are seven billion of me currently occupying this planet.
Upvoted for this. Exactly what I feel about statements that people are their brains. (Hofstadter makes a similar point in I Am a Strange Loop.)
This has been mentioned before, but are you taking the positive externalities of cryonics into account?
Specifically, signing up for cryonics increases the visibility and probably the credibility of cryonics. Consider also that cryonics is so tiny that one additional member has a relatively large impact.
Many of your objections to cryonics are based on the world of today, where cryonics is weird and marginalized. Have you tried recalculating your probability of cryonics success in a hypothetical world where cryonics is normal?
I think the most likely path to a world of normal cryonics is through individual signups. And I consider that world to be valuable enough to pay for a small chance of bringing it into existence.
I’m not taking them into account, but mostly because I think they’re small. The “cryonics wants to be big” article is only concerned with the storage costs, which I think are not a big issue. The big cost currently is in preparation for freezing, not in keeping the brain cold: freezing and preparation is ~$100K, while long-term storage is only ~$100/year (so a $10K endowment if you can get a 1% real return).
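To make that concrete, here’s a minimal back-of-the-envelope sketch; the $100K, $100/year, and 1% figures are just the rough numbers above, and treating storage as a perpetuity is a simplifying assumption:

```python
# Back-of-the-envelope: one-time preparation cost vs. the endowment needed
# to fund ongoing storage as a perpetuity. Figures are the rough numbers
# quoted above, not actual cryonics pricing.
prep_cost = 100_000        # ~$100K for standby, perfusion, and cool-down
storage_per_year = 100     # ~$100/year for liquid nitrogen and space
real_return = 0.01         # assumed 1% real return on the storage fund

# A perpetuity paying $100/year at a 1% real return needs 100 / 0.01 in principal.
storage_endowment = storage_per_year / real_return

print(f"storage endowment needed: ${storage_endowment:,.0f}")
print(f"storage share of total cost: "
      f"{storage_endowment / (prep_cost + storage_endowment):.0%}")
```

On these numbers storage is under a tenth of the total cost, which is why I don’t think the storage externality matters much.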
Hanson’s article brings up more issues, which I think is good.
Even if cryonics became massively popular, however, we’d still have several problems: we don’t know if we’re preserving all the information, we don’t know if it will ever be possible to extract the information, we might kill ourselves off first, it might never be cheap enough to revive a significant fraction of those frozen. These concern me enough that I think inducing others to sign up might actually be negative because it’s redirecting money from current spending to what I think is probably a waste.
The way I see it becoming normal starts with adoption by the scientifically minded rational atheists who are its most natural audience. This group is going to be (rightly) skeptical unless we can do a freeze/revive round trip on a person. (EDIT: I’ve backed off from this claim some downthread.) I don’t see us getting to a world where cryonics is normal just by signing up and encouraging friends to do so. What I see as most promising, actually, is the continuation of low-temperature medical research that has no intended cryonics application.
I don’t think that is reasonable at all. Scientifically minded rational atheists who want to be the gatekeepers on this matter should be basing their opinions on the entirety of the available relevant data, not on (the equivalent of) whether video tapes of monkeys turning into humans can be produced.
Frankly, your analysis is one-sided. Uncertainty on this level should be making you move towards .5, not 0: you are seeing the absence of conspicuously conclusive evidence rather than conspicuously absent but expected evidence. That sort of observation increases randomness; it doesn’t count as disconfirmation.
We don’t know that the information is being lost (that’s kind of a key point here, and yes, we do have ways of telling when information goes missing; the fact that we haven’t detected such loss is actually pretty good evidence to the contrary). We don’t have any particular reason to think we couldn’t extract it if it is not lost (while there are limits, the laws of physics do in fact allow a marvelous variety of things to be done). We probably won’t kill ourselves off if we’re careful (yeah, there’s a reason we worry so much, and it’s because worrying works). Most expensive things that are not intrinsically resource-expensive tend to grow cheaper on a unit basis when there is enough demand to justify large-scale infrastructure and cover initial development costs (I haven’t heard a proposal for why reviving humans would be resource-expensive, as opposed to primarily an R&D cost, which really seems the more natural assumption).
It’s interesting that the guys who froze a rabbit kidney in M22 and reimplanted it successfully were in fact cryonicists (motivated by cryonics and funded by investors with cryonics interests), but kept their mouths shut about it in their published paper, making a big point instead about how this kind of research will be of great help for organ and tissue banking.
The trouble as I see it is that for cryonics to work optimally we need direct research on brain vitrification, so that we can minimize the damage that will need to be repaired. (We should strive as a rule of thumb to get it down to where mere engineered biology, as opposed to the more dramatic mechanosynthetic forms of nanotech, is sufficient.) Focusing on whole bodies and other organs is a suboptimal use of time for this goal except where it serves as a useful model for brains. Also banking of organs may become less of a concern from a general medical perspective over the next 50 years or so as we become better capable of regrowing tissue into functioning organs using scaffold and printing models, from a patient’s own DNA.
For evolution we have enough data that evolution comes out as the most likely theory given the data. Without freezing and thawing a brain, how do we get sufficient evidence that we’re storing everything necessary, such that “it’s all still there” becomes the most likely theory?
Most complex scientific procedures need a lot of testing to get right. We can’t test here. We might be lucky, but I do think we have only a ~20% chance of being lucky enough.
Each of those steps needs to go right for cryonics to work. I’ve already moved my probabilities of success towards .5 to reflect uncertainty, but when you multiply them all together you still get a very small number.
I’m not sure what you’re saying here. How would we know whether the current cryonics process were storing everything? The brain is very fragile, with lots of tiny connections, and it seems we could easily be disrupting them.
Sure. I put the impossibility of extraction given that we preserved everything at 5%.
I agree. I put us killing ourselves off at 10% and society falling apart (if we don’t kill ourselves first) at 40%. Together this gives a 46% chance of one of them happening. Do you think these odds are too pessimistic?
Reviving someone means very powerful computers and lots of storage. We might figure out how to do this for some people, but still have it be too expensive for everyone. I do think this is unlikely, though: I put down 10% as the chances that it would be simply too expensive.
Interesting. I didn’t know this.
I was thinking more about the cooling we do for heart surgery.
A good starting point would be electron micrographs of the tissue. Another would be cellular viability assays based on cellular ion pumping action.
We can also rule out competing hypotheses of memory formation which would negate the premise by examining what happens to the memories of humans subjected to hypothermia. Even better, there are some tests on the connectome hypothesis we can plausibly run in the near future (IIRC this involves teaching music to some neurons, then scanning them and recreating the song in the computer).
Then there is the measurement of electrical activity in tissue samples. Organized whole-brain electrical activity should be adequate proof of biological survival (a far more conservative standard than information-theoretic survival). I expect this to be achieved in 20 years or so, given adequate funding and attention. I would expect perfect brain preservation to be a bit further out, probably closer to the 50-year mark.
I agree that the process needs a lot of testing to get right. However the testing can take the form of tissue viability assays, it does not need to take the form of a whole body. That is far more ambitious than a whole brain and thus harder to perfect. Different organs have different optimal cooling rates and perfusion rates. We should only expect that particular test to be possible either when we are at the point of printing bodies and transplanting brains into them on demand, or when we have long since solved the problem of individual organ (including brain) preservation.
Another thing we have going for us is that technology of the future can almost certainly repair the brain to some degree. Even if we are limited to bio-analogous nanotech, such as genetically engineered microbes (and I don’t think this is the case where cryopreserved brains are concerned), we can still expect fairly extensive repair work to be possible.
They don’t all have to go right in the most dramatic 100% manner. For example, you probably do not need the brain to be able to remember more than 90% of the past self to be a valid form of survival, and you might not need more than 10%. To some degree this is a subjective values question. A clone-like individual with vague memories of your childhood may be worth spending a lot of money on, or it may not be… But it’s extremely low probability that you won’t preserve enough information for at least that kind of reanimation, whereas the likelihood that you won’t have any perceptible memory loss at all is also pretty low (for a cryopreservation performed today).
The brain is warm, wet, and protein-based, and has to put up with a lot of chaos on a day to day basis. It is composed of wetware, not hardware. Thus it is reasonable to expect the data to be stored fairly redundantly and with the most important and oldest information in the toughest/thickest connections.
Many of the smaller connections are likely to be lost in a suboptimal preservation, as happens I believe with some forms of dementia. However this is one of the things we can get empirical feedback on by scanning dendrite preservation in small animals as we cryopreserve and thaw them.
Sure, but it makes a big difference whether everything is preserved by say preserving viability in all the cells (a theoretical ideal currently achievable only in slices), versus say fixation, which locks the information in place but completely eliminates viability. All other things equal, a zero-viability mechanism has less probability of being survivable than a viability-preserving mechanism.
Yes, I think they are, given how long we have avoided either of these fates so far. Although I’d be cautious about downplaying them, since the act of downplaying them could lead to them happening due to lack of caution. Literacy, science, and so forth have a very strong ratchet effect on progress. I’d give the conjunction of the two (downfall of civilization and killing ourselves off) a 10% maybe, supposing relative tech stagnation over the next 200 years consistent with no reanimation. Less if tech doesn’t stagnate, since that is more correlated with literacy.
Another point worth raising at this point is that cryonicists have added incentive to look at extinction and civilization-wrecking events using near-mode reasoning. This could be a positive externality worth taking into consideration.
The storage space needed (ongoing) should be no more than that of the brain itself, though I can see some being required temporarily for heavy extrapolations if we need to infer lots of details from badly preserved tissue. This should be correlated with worst case viable preservation (low to no cellular viability, lots of simulation and educated guesswork involved in the reanimation process).
If we get to the point of perfectly or near-perfectly preserved brains, it should be cheap in principle to implant them into printed bodies (possibly with artificial limbs, hearts, etc.) The reason to think simulation is the viable candidate is mainly an extrapolation of Moore’s Law which indicates this kind of storage space should be extremely cheap before we hit physical limits, and the fact that you’d need some hefty nano-equipment to scan on that level of detail anyway.
This is hypothermic medicine, which is both important for cryonics and important to distinguish from cryonics (which is more a subset of cryobiology). The temperature levels involved are much closer to room temperature, not cryogenic. There is an interesting parallel to cryonics in that a patient might be cooled to hypothermic temperatures (similar to a hibernating animal), and if they could be stabilized there they would age more slowly, cancers would progress more slowly, etc.
You’ve convinced me that there’s actually a lot we can do to test how well current cryonics processes work that doesn’t require round trips on human brains.
If we do these tests and adjust the cryonics procedures in response to what we learn, my estimate for the chances that we’re not preserving what we need to will probably go down a lot.
Most of the steps I’m on the negative side for are pretty much binary: either they succeed and you proceed to the next step, or they fail and you’re done. Looking at all the ones where I think there’s over a 10% chance of failure, and labeling those where you might get a partial reconstruction (a rough sketch of how these multiply out follows the list):
You die suddenly or in a circumstance where you would not be able to be frozen in time: 0.1 (partial)
Some law is passed that prohibits cryonics before you die: 0.1 (absolute)
The cryonics people make a mistake in freezing you: 0.1 (partial)
The current cryonics process is insufficient to preserve everything: 0.8 (partial)
All people die: 0.1 (absolute)
Society falls apart: 0.4 (absolute)
Some time after you die cryonics is outlawed: 0.2 (absolute)
All cryonics companies go out of business: 0.4 (absolute)
The cryonics company you chose goes out of business: 0.1 (absolute)
The technology is never developed to extract the information: 0.6 (absolute)
No one is interested in your brain’s information: 0.4 (absolute)
It is too expensive to extract your brain’s information: 0.4 (absolute)
The technology is never developed to run people in simulation: 0.4 (absolute)
Running people in simulation is outlawed: 0.2 (absolute)
No one is interested in running you in simulation: 0.3 (absolute)
It is too expensive to run you in simulation: 0.1 (absolute)
Other: 0.2 (both)
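Here’s a minimal sketch of how these estimates might combine, assuming the steps are independent; since partial reconstructions aren’t really all-or-nothing, treating the “partial” failures as total losses gives a pessimistic floor, while ignoring them gives a ceiling:

```python
# Combine the per-step failure estimates listed above, assuming the steps
# are independent. "Partial" failures aren't really all-or-nothing, so
# treating them as fatal gives a floor and ignoring them gives a ceiling.
absolute = [0.1, 0.1, 0.4, 0.2, 0.4, 0.1, 0.6, 0.4,
            0.4, 0.4, 0.2, 0.3, 0.1, 0.2]   # "Other" folded in here
partial = [0.1, 0.1, 0.8]

def survival(failure_probs):
    p = 1.0
    for f in failure_probs:
        p *= 1 - f
    return p

ceiling = survival(absolute)            # partial failures ignored
floor = survival(absolute + partial)    # partial failures treated as fatal

print(f"ceiling: {ceiling:.4f} (about 1:{1 / ceiling:.0f})")
print(f"floor:   {floor:.5f} (about 1:{1 / floor:.0f})")
```

The floor comes out around 1:850, in the same ballpark as the 1:1000 figure in the post: even though most steps are individually likely to go right, the product ends up small.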
Humanity is far more powerful and capable than it’s been for most of its history. It will probably get more that way.
I’d be surprised if promoting cryonics beats straight up x-risk awareness advocacy.
You may be right. If we can get a compact representation this might not be that big at all. At 20 billion neurons with maybe 100 connections per neuron and four bytes per connection, we have ~7 TB of information. This is way less than the processing requirements. The WBE Roadmap thinks we might need anywhere from 50 TB to 10^9 TB depending on the level we need to emulate at (p. 79).
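For a quick check of that arithmetic, using the same assumed numbers:

```python
# Rough storage estimate for a connectome-level description, using the
# illustrative numbers from the comment above (not measured values).
neurons = 20e9                # ~20 billion neurons
connections_per_neuron = 100
bytes_per_connection = 4

total_bytes = neurons * connections_per_neuron * bytes_per_connection
print(f"{total_bytes / 1e12:.1f} TB (decimal), "
      f"or {total_bytes / 2**40:.1f} TiB (binary)")
# -> 8.0 TB, or about 7.3 TiB, which is where the ~7 TB figure comes from.
```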
That would be convenient :) Then I don’t have to do anything.
Thanks for the well argued article and comment.
Wouldn’t a world where cryonics is normal have a very high probability of being a world where cryonics is already successful (i.e., people are getting revived successfully all the time) ? I have trouble imagining a world where cryonics is normal and popular, and yet it has no proven track record—unless cryonics became a religious issue, somehow...
Maybe not popular, but a world where it is tolerated to the point where a large number of people sign up for it. And if something like modern rationality becomes popular and also advertises cryonics, that becomes a possibility.
Externalities are irrelevant if it turns out that revivification is impossible and cryonics doesn’t work.
What about externalities that don’t relate to reanimation? For example, we might expect an increase in public confidence in the scientific method, rather than supernaturalism, as a solution to the big scary problems. Perhaps there would be more STEM professionals and fewer clergy in a world where cryonics is common. Perhaps it would lead to increased aggregate demand for research into hard-to-cure yet rare illnesses (especially forms of brain damage).
I did some back-of-the-envelope estimation, and it seems to me that it can be practical (and in fact fairly cheap) to pre-load enough liquid nitrogen for a couple of centuries. MRI magnets are an example of keeping something cold for years without re-loading. If I were to sign up for cryonics I’d have to design my own coffin, it seems.
edit: a note on the MRI magnets—I found out they do use a refrigerator on the outer side of the dewar; I’m not sure how necessary this is, though, if you can add reflector shields inside the dewar.
edit: I’ve got an even better idea. The dewar interior can be magnetically levitated using superconductors on the cold side. The vacuum can be maintained by a charcoal cold trap on the cold side (as per standard practice). Nested lightweight reflective enclosures can be suspended around it on thin filaments. A coolant other than liquid nitrogen will have to be used (there will be no vent); I’m thinking of a gas that can be chemically absorbed at low temperature such that evaporation and absorption are net endothermic. Alternatively, a solid-liquid phase change could be utilized instead of liquid-gas. This needs a huge and detailed study analysing multiple alternatives to determine the most cost-effective approach.
You think we could set things up so there was no maintenance needed? This would give cryonics many of the advantages of plastination.
It sounds really expensive, though.
Well, it is way simpler than the hard drive in your computer. The problem is that it is not mass-manufactured, and needs a lot of engineering work. (Also, I’m considering just the brains.)
I’m looking at this table for the solid coolants:
http://physics.info/heat-latent/
Frozen ethane looks promising: half the per-kg latent heat of liquid nitrogen, and it’s probably denser. The freons also have very low melting temperatures and may have high latent heat of fusion, but I can’t find any data.
It’s also hard to estimate the rate of heat transfer through a multi-layered reflector. I’ll think it over when I have time. I’m quite rusty on this. For those who want to calculate: the radiation is proportional to temperature to the 4th power (Stefan–Boltzmann law), and in equilibrium each shield radiates as much as it absorbs; I don’t know how to model shiny metals.
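As a starting point, here’s a rough sketch of that calculation under idealized assumptions of my own (gray parallel plates with equal emissivity, no conduction through supports or residual gas, and made-up values for area, emissivity, and shield count); real multilayer insulation does noticeably worse because inter-layer conduction dominates:

```python
# Idealized radiative heat-leak estimate for a nitrogen dewar with N floating
# reflective shields (parallel-plate approximation). Ignores conduction
# through supports and residual gas, which dominate in real dewars.
SIGMA = 5.67e-8               # Stefan-Boltzmann constant, W / (m^2 K^4)
T_HOT, T_COLD = 300.0, 77.0   # room temperature and liquid nitrogen, K
EMISSIVITY = 0.05             # assumed emissivity of the reflective surfaces
N_SHIELDS = 40                # assumed number of floating shields
AREA = 1.5                    # assumed effective surface area, m^2

LN2_LATENT_HEAT = 199e3       # J/kg, heat of vaporization of nitrogen
LN2_DENSITY = 808.0           # kg/m^3
SECONDS_PER_YEAR = 3.156e7

# Gray parallel plates with equal emissivity; N floating shields cut the
# net flux by a further factor of (N + 1).
flux = SIGMA * (T_HOT**4 - T_COLD**4) / ((2 / EMISSIVITY - 1) * (N_SHIELDS + 1))
heat_leak = flux * AREA       # watts

boiloff_kg_per_year = heat_leak * SECONDS_PER_YEAR / LN2_LATENT_HEAT
boiloff_litres_per_year = boiloff_kg_per_year / LN2_DENSITY * 1000

print(f"heat leak: {heat_leak:.2f} W")
print(f"boil-off: {boiloff_kg_per_year:.0f} kg/year "
      f"(~{boiloff_litres_per_year:.0f} litres/year)")
```

With these made-up numbers the leak is well under a watt, and a watt of leak boils off roughly 160 kg of liquid nitrogen per year, so the pre-load needed for centuries is set almost entirely by how good the insulation and supports can be made.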
In any case it would be great to just boost the maintenance-free time to a couple of years. I’m not sure everyone wants to wake up after an event during which maintenance was suspended for decades or a century. Anything could happen, incl. a show trial for some nonsense crimes. edit: or worse yet, the copyright expiring on yourself as IP.
My odds aren’t that low, but I’m similarly pessimistic because of one particular step—recovery of my body in time for meaningful information preservation. Most people who die young die of accidents, not protracted illnesses, which means quick vitrification is extremely unlikely. Term life insurance is around an order of magnitude cheaper than whole life insurance, but it will only protect you while relatively young, leaving you on the hook during your retirement-at-fixed-income years, where the bulk of the death probability distribution is and where timely vitrification is likelier.
Clearly, if you see larger costs as you age, then the incorrect course of action is to simply do nothing and find, when you are old, that you have no money to pay for the policy. If you don’t want to spend a large amount when you are old, then save now. Perhaps if you save/invest enough, you will have enough money to simply buy a cryonics policy directly.
That used to be the standard wisdom, but it is now disputed. There is evidence that aging stops. True, for humans it does so only when we are already decrepit and stand a substantial chance per year of dying, but some other animals stop aging before that point. So it is not beyond hope that we might make our bodies last much, much longer.
ETA: Googling further, I found Michael Rose’s 55 theses.
And one of the things he writes about is the cessation of aging, of how earlier cessation can be bred for in experimental animals (don’t let them breed after a certain age: the earlier they stop reproducing, the earlier their cessation of aging evolves to be), and of measures the individual can take to promote their personal cessation of aging.
I have only glanced through this enough to think it worth posting here.
That seems bizarre. Is it possible that aging is a means for individuals to signal fitness? (I look really old; I must be really good at surviving!)
Only if you’re going to carry on reproducing at that age.
BTW, I may have misstated the method of breeding for earlier cessation of aging. I can’t find again what I was reading that I described in those words, but other papers of Rose talk about promoting longevity by selecting for late offspring.
I found this bizarre too. So I looked up a paper by the same authors who wrote the book in the “aging stops” hyperlink to investigate.
By aging they mean the increase in mortality rate as a person gets older, i.e. an 80-year-old is more likely to die this year than a 60-year-old.
The theory is that there is a given high rate of mortality that would prevail for the whole of life if natural selection did not exist. However, natural selection does exist, and so for the early part of an organism’s life the mortality rate is lower than it would be otherwise. This lowering diminishes over the reproductive part of the organism’s life. Thus before reproductive maturity the lowering is at its maximum, and at the end of its reproductive life the lowering is gone and the organism reaches the (high) unadjusted level of mortality. Thus species that stop reproducing earlier reach this high level of mortality earlier.
From our perspective, and using this definition, it would be better if “aging stopped” later, because then we would live longer before reaching the higher mortality rate.
Of course, what we really want is to simply reduce that higher mortality rate permanently.
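To make that mortality-rate definition of aging concrete, here is a toy model in which the annual hazard rises Gompertz-style and then plateaus (the point at which “aging stops”); the parameters are invented for illustration, not fitted to human data:

```python
# Toy illustration of "aging" as a rising annual mortality rate that
# eventually plateaus. Parameters are invented for illustration only.
BASE_RATE = 0.0002      # annual mortality hazard at age 30 (made up)
DOUBLING_TIME = 8.0     # years for the hazard to double (Gompertz-like)
PLATEAU = 0.5           # hazard stops rising here: "aging has stopped"

def annual_mortality(age):
    rising = BASE_RATE * 2 ** ((age - 30) / DOUBLING_TIME)
    return min(rising, PLATEAU)

for age in (30, 60, 80, 100, 110, 120, 130):
    print(f"age {age}: {annual_mortality(age):.4f} chance of dying that year")
```

In this toy version the plateau only kicks in once the yearly risk is already around 50%, which is the sense in which aging stops only when we are already decrepit; having it stop earlier, or lowering the plateau itself, is what would actually buy long lives.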
Interesting; if it’s true the implications would be HUGE, but then what mechanism would mediate aging and the eventual stop of it? It all seems rather counterintuitive, at least to me, and reading the preview made me no wiser:
BTW here is a video about the long lived flies aka Methuselah Flies
Edit: I looked through the 55 theses, and got a somewhat satisfactory answer.
If you can set up a loop (3D fabrication devices, fabrication tools, passive and active damage sensors, machines for disassembling things into basic parts and melting them into scrap, robots for assembling them, some source of power, a database for tracking things, wifi or bluetooth to connect everything), make it all modular and redundant, with the robots also assigned to removing and replacing broken parts on each other and everything else, and get that to be self-repairing in a sustaining way, then you can just add things into its loop. So, hypothetically, you build a big pyramid vault somewhere with a lot of spare raw materials for what gets slowly lost in the recycling process, and you staff it with robots… it won’t last forever but it might last a long time. Maybe you’d even incorporate an organic phase: dump unsalvageable plastic parts into a pool of bacteria or a garden or something, harvest plants, make plastic… it shouldn’t even take nanotech to make a self-repairing setup that could care for your cryonically stored brains.
We don’t have much experience in building tech that will last a long time without human interaction. Our society has been moving towards a “bring down the initial purchase price by sacrificing reliability, expect people to buy another when it wears out” model for a long time. Even if you do your best to find only parts that are of the highest quality and expected to last a long time, you’re not going to be able to avoid planning to use many parts for 10x longer than anyone has ever used them before.
Aside from component reliability, this sounds very complicated: difficult and expensive to make, and then there are issues of system reliability.
Agreed, but I think it’d be a worthwhile project to work towards. I can think of some ways to make it simpler. Recognition of modules could be aided by RFID tags or just plain old barcodes embedded in the objects, carrying some information about what part a robot is looking at and its orientation relative to the barcode stamp or RFID chip. There could be lines painted on the floors or walls and barcodes visible for navigation around the facility. I guess a really hard part would be maintaining the pyramid or structure or whatever housing everything. You’d have to choose between building something you hope will last a long time and leaving it be, like a big stone pyramid or even a cave, or building it all modular like the rest of it, like a latticework or robot-hive kind of thing. I’m kind of thinking something like these would be useful for city building, too… there was an article in Discover a long while back that referred to a paper by Klaus Lackner and Wendt about their idea for auxons, I think it was: machines that would turn a big chunk of the desert into solar paneling. http://discovermagazine.com/1995/oct/robotbuildthysel569 <--- there. Their suggestion was to harvest raw materials from the desert topsoil using carbothermic separation. I’m thinking you could use something similar for recycling if everything else failed? I don’t know enough about the processes involved. I guess the idea has been a research area for a little bit: http://en.wikipedia.org/wiki/Clanking_replicator … well, anyways. The redundancy of the elements involved could overcome some reliability issues. There doesn’t have to be a crucial part of the chain where if one piece breaks down everything is broken. Problems could at least be relegated to disasters affecting whole classes of objects breaking down at once, like if all the robots were smashed at the same time by vault-robbers.
I should say I agree that we don’t have much experience in building tech that will last a long time and that the expense is definitely high. I don’t know that component reliability is as important as being able to replace components efficiently with as little waste as possible. Energy demand is a big concern. Having a fully automated power plant of some kind is a big concern, although maybe solar wouldn’t be so bad. I know you’d still desire to store the heat energy, say, as molten saline, to get steady output, and that could cause big difficulties in the long term. Maybe steady output isn’t necessary though, just frequent enough and high enough output to keep things repaired before too many break down.
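As a rough illustration of the redundancy point, here is a toy Monte Carlo sketch; every number in it (fleet size, failure rate, repair success) is invented, so it says nothing about real hardware, only about the shape of the argument:

```python
import random

# Toy Monte Carlo of a vault maintained by a fleet of mutually repairing
# robots. The vault fails once too few working robots remain to maintain it.
# All parameters are invented for illustration.
FLEET_SIZE = 20        # robots at the start
FAILURE_RATE = 0.10    # chance a given robot breaks in a given year
MIN_CREW = 5           # below this, the vault can no longer be maintained
YEARS = 200
TRIALS = 2000

def vault_survives(repair_success):
    working = FLEET_SIZE
    for _ in range(YEARS):
        failures = sum(1 for _ in range(working) if random.random() < FAILURE_RATE)
        working -= failures
        if working < MIN_CREW:
            return False
        # A broken robot is restored only if spares/recycling cover the repair.
        working += sum(1 for _ in range(failures) if random.random() < repair_success)
    return True

for repair_success in (0.90, 0.99):
    survived = sum(vault_survives(repair_success) for _ in range(TRIALS))
    print(f"repair success {repair_success:.0%}: {survived / TRIALS:.1%} "
          f"of simulated vaults still running after {YEARS} years")
```

In this toy model, long-run survival depends much more on how completely failures can be repaired (the spares and recycling loop) than on how many robots you start with, which is roughly the intuition behind closing the loop.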
At that point you can build a self-replicating seed and get it onto the moon. I’m not sure why there isn’t enough focus on this.
There are probably good reasons I’m missing. My feeling, though, is that once you get a clanking replicator, you can put more objects into its loop for it to maintain, and grow it up into cities and things that are (eventually) totally self-repairing and post-scarcity. Kind of like a big matter-moving operating system. It might only be, you know, simple at the beginning, but there’d be huge upward potential for growth and sophistication.
Do you have a Fermi calculation on the probability of it working? I’d be interested, even though I’m in that “there are plenty of other things one can do with money” group.
I’ve been collecting various Fermi calculations into a Google Doc.
I don’t understand the “no one is interested” ones. Wouldn’t they get interested once the amount of money in the trust keeping you frozen exceeds the cost of reviving you?
The “it’s too expensive” one is similar, although it could be expensive enough that nobody is willing to let money gather in a trust long enough to pay for it.
If cryonics becomes illegal, wouldn’t they just move the bodies to somewhere where it’s still legal?
I don’t expect that money to grow much. We’re taking out 1% each year for maintenance, so unless the real return is consistently a good bit higher the money doesn’t grow much if at all.
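A minimal sketch of that trajectory, with placeholder figures (the starting balance matches the rough $10K storage endowment discussed upthread, and maintenance is modeled as a 1% withdrawal of the balance each year):

```python
# Trajectory of a storage trust that pays out 1% of its balance each year
# for maintenance, under a few assumed real returns. Placeholder figures.
START_BALANCE = 10_000      # dollars, roughly the storage endowment upthread
WITHDRAWAL_RATE = 0.01      # 1% of the balance per year for maintenance
YEARS = 200

for real_return in (0.00, 0.01, 0.03):
    balance = START_BALANCE
    for _ in range(YEARS):
        balance *= (1 + real_return) * (1 - WITHDRAWAL_RATE)
    print(f"real return {real_return:.0%}: ${balance:,.0f} after {YEARS} years")
```

Only when the real return is consistently well above the withdrawal rate does the trust grow into anything that might someday exceed the cost of revival.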
This is expensive, moving bodies away might be illegal, and most other countries are less stable. But yes, it might well be prohibited in a way that has basically no effect.
Um, the citation didn’t contain strong evidence for this claim (in fact, it didn’t even make the claim).
EDIT: Whoops, missed the final paragraph. You did make that claim. Still, I’d say the evidence in that post is inadequate to such a strong claim, for reasons that other commenters have raised.
“Hundreds of years off” is a common estimate people give for technologies that seem really complicated and hard to make with our present knowledge. I’ve always found this fairly ridiculous; it’s pretty much unprecedented in human history. When have we ever conceived of a specific technology, which we had any understanding of the workings behind, and taken hundreds of years to make it? The only cases I’m aware of where any sort of technology has been in development for that long are things like heavier-than-air flight, where we spent hundreds of years not applying the scientific method to understanding the problem and just threw up solutions willy-nilly.
Arguably “immortality” has been on the back-burner for a while.
As far as I know it’s only pretty recently that we’ve actually started applying the scientific method to the whole people dying problem, rather than just tossing up solutions like “invent the Elixir of Life.”
No specific tech for it.
In June 1768 Erasmus Darwin told Josiah Wedgwood that Edgeworth had ‘nearly completed a Waggon drawn by Fire’. In modern parlance, a motorcar. It didn’t work, and Edgeworth was a bit of a dreamer; I don’t know how far he actually got. Nevertheless, people saw the technical possibility of motorised road transport that early, and others were also attempting the technology. The motorcar is a good example of a slow-burning technological development, taking perhaps 120 years from being obviously possible to being on sale.
This does meet the specifications I had in mind, save that 120 years is less than “hundreds.” It’s the slowest example I’m aware of, though, and it was conceived before the second industrial revolution, quite early into the first.
The last 200 years have seen the development of most of the technology the human race has ever created. The first industrial research lab was founded less than 140 years ago. 300 years ago, we were only barely engaged in the process of applying dedicated empirical research to making new stuff. In terms of predicting future technological development, we really don’t have hundreds of years of meaningful data to extrapolate from.
The first that came to my mind was the photovoltaic effect, discovered by Alexandre Edmond Becquerel in 1839. Even today it takes massive subsidies to make photovoltaics competitive. And we are not even close to the energy efficiency of photosynthesis.
I bet there are a lot of other examples.
Citation? Wikipedia gives Photosynthetic efficiency at under 11% and Solar cell efficiency at up to 40% for research-grade photocells, and one company claims 24% efficiency for its commercial cells.
Certainly we’re not close to the energy/cost efficiency of photosynthesis.
There’s a big difference though, between “This technology has not been realized” and “this technology is not cost competitive with other technologies for similar purposes.”
I’d also mention Fermat’s Last Theorem as a very specific example, since it took over 300 years to prove. It’s rare, but it has certainly been known to happen.
I wouldn’t call Fermat’s Last Theorem a technology though.
Many times, unless you weasel pretty strongly with “any understanding of the workings behind”. Science fiction has been around a long time. Mary Shelley, writing in 1818 and aware of Galvani’s experiments with electricity and frogs, conceived of applying this to reanimation of the dead. Jules Verne, writing in 1865, conceived of traveling to the moon with a space cannon.
I think you’d have to stretch the meaning of scientific understanding pretty far to claim that the 19th-century writers speculating about reanimating the dead with electricity understood what they were talking about.
Besides, if I’m remembering Frankenstein right, there’s no clear method of reanimation given but it’s at least partly occult: Shelley name-drops several famous alchemists. She might have been inspired at some level by Galvani’s experiments, but the procedure involving a dramatic lightning storm and “give my creation life!” is a cinematic invention—and arguably has a certain occult flavor in its own right, given all the associations with divine fire that lightning’s picked up in culture.
What about: “you’d have to stretch the meaning of scientific understanding pretty far to claim that early 21st-century people speculating about reanimating the dead with uploading understood what they were talking about.”
I could easily see the people who figure out whole brain emulation saying the same of us.
Nah. I can see the scanning procedure needed for whole brain emulation turning out to require some unspecified technology that’s way too difficult for 21st-century science, or Moore’s Law running out of steam before we reach the densities needed to do the actual emulation, but either one would be a Verne-type error; I can’t see a category error on the order of electrical impulse ⇒ true resurrection happening unless we’re very badly wrong about some very fundamental features of how the brain works.
I agree with Nornagest’s interpretation regarding whether people in the 19th century had any idea what they were talking about with respect to reanimating the dead. With regards to space cannons, those turned out to be unworkable and we never made them at all. The gap from “maybe we could get to the moon with some sort of space-rocket” to actually making spaceships was much shorter.
There are plenty of cases where speculative technologies have turned out to be unworkable and were never actually put into use, but that’s an error of a different kind than speculating that a specific technology is hundreds of years off.
Still, there were designs for space rockets as early as 1881 (Nikolai Kibalchich), and maybe earlier. Tsiolkovsky worked out his formula for estimating the required amount of fuel in 1897, 60 years before the first artificial satellite.
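For reference, the formula in question is what’s now called the Tsiolkovsky rocket equation, relating the achievable change in velocity to the exhaust velocity and the ratio of initial (fueled) to final (dry) mass:

    \Delta v = v_e \ln\left(\frac{m_0}{m_1}\right)

so the required mass ratio, and hence the propellant load, grows exponentially with the desired change in velocity.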
Babbage produced designs for his Difference Engine as early as 1822. His designs from 1847-1849 were eventually built and worked, a century and a half later. The first Turing-complete computer actually constructed was apparently ENIAC, in 1946.
So, for the primary wishes of humanity, one century from working blueprint to implementation (of a more efficient design) is not unprecedented. Of course, we don’t know whether our current cryonics is theoretically enough for preservation...
Due to the lack of ways to calibrate, “hundreds of years” cannot be taken as a precise prediction, of course. On the other hand, “we have a general idea” cannot be taken as a prediction either. After all, a fusion power plant could turn out to be simpler than reviving cryonics patients.
http://www.alcor.org/cases.html - 1967, first human cryopreservation. That was 45 years ago. We’ve already waited nearly half a century, and I haven’t seen any research that suggests revival is likely to occur in the next decade. Calling it a century from the first human cryopreservation, a mere doubling of the time we’ve already waited, does not seem at all like an unreasonable assertion.
Now that I think about it, computing machines might be another good example. To the Wikipedia-mobile, Batman!
Abacuses (or abacii or however you spell them) existed in ancient Babylon, but were obviously quite primitive. Mechanical calculators were developed in the 17th century AD. Charles Babbage designed his Difference Engine in 1822, but the technology to build one did not yet exist (or if it did, it was more expensive than a battleship, which amounts to the same thing). Convergent improvements in computing technology as well as mechanical engineering led to the building of mechanical analytical engines a few decades later. Electronic computers began showing up in the 20th century; and today, we have the Internet for people to twitter on.
I don’t think anywhere along the line though, anyone said “with sufficiently advanced engineering we might create a network of electrical difference engines capable of communicating complex packages of information across the world, but this would take hundreds of years to develop.”
Some technologies are the results of long chains of developments, but I’m not aware of any cases of people conceiving of specific technologies hundreds of years down the chain based on any meaningful understanding of the principles at work.
I would assume that’s because, when you’re working hundreds of years down the chain, chances are very high that you’ll run into some unexpected obstacle, and the final result will look sufficiently different that we dismiss the previous idea as having missed the true target (for instance, 1800s “space cannons” vs. modern rocket ships).
That said, Tesla and Fermat both strike me as potential examples. It’s unclear whether they were making assertions without evidence to back them up, or whether they really had decent insight into what we’d be doing centuries down the road. Tesla is largely considered crazy, but Fermat fascinated people long enough that they spent a few hundred years proving his last theorem (hey, an example of a 300-year waiting period!).
You might have a point about information networks; I don’t know enough about the history of computing to know whether Babbage or Turing or someone of that caliber ever proposed them. But there was definitely a need for difference engines and automated calculators, for use in financial and navigational calculations. This is why the technology developed so quickly (relatively speaking) once funding became available and mechanical engineering improved to the point where difference engines could actually be built.
Certainly, I would expect that if I follow the chain of thoughts that led to a technology back through time, the understanding of the underlying principles will grow less and less meaningful.
I don’t have a principled way of establishing where to draw the line along that continuum, and in the absence of such a principled threshold I am not certain how to distinguish this from a “No True Scotsman” argument.
For anyone else to distinguish it from a No True Scotsman argument would probably require me to have been much more precise than I’m in the habit of being in regular conversation, but I have a pretty solid idea of what I meant when I made my original claim, and I’d be able to tell if a particular example met my specifications. Of course, if there were anything really obvious that qualified, I’d be likely to have thought of it before and not made the claim in the first place.
Anyway, it’s possible that there are specific technologies that were conceived of hundreds of years in advance of the point where it was possible to implement them according to the specifications I have in mind, and I’m not aware of it, but if they were conceived of before the industrial revolution, I don’t think we can take it as a very meaningful precedent to generalize from now.
What about industrial steam power? Hero of Alexandria developed a steam-powered toy back in the first century AD, but it took a millennium and a half before a true steam engine was developed and harnessed for useful work.
If they had come up with the idea, say “we could use this steam power effect to create machines which do work without using the power of living creatures,” but then failed to work out how to do that, I’d say that would count, but as far as I know they did not. Not noticing avenues for technological development is not the same as conceiving of specific technologies but taking a long time to successfully implement them.
Didn’t Hero of Alexandria attach his steam engine to some kind of a door-opening mechanism? Granted, he probably wasn’t too concerned with “doing work without using the power of living creatures”, what with all the cheap slaves hanging around, but still, at least he knew it could be done...
I’ve weakened the claim to “it may be”.
How soon do you think it is?
I have a wide probability distribution. There’s a chance, of course, that my basic intuitions are wrong in such a way that it’s actually impossible to emulate a human brain and have it preserve the “essential” stuff. Aside from that (which I give less than 5% credence), I’d be quite surprised if it took less than 5 years or more than 100 years of technological development, since we currently have a reasonable roadmap and since there are plenty of other technologies (like nanotech) that would enable massive shortcuts.
(If there were a nuclear war or an anti-scientific dystopia, of course, that would halt the timeline of progress. That’s why I say years of technological development, not of clock time.)
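If it helps to pin that down: one way (among many) to encode that intuition numerically is below; the lognormal shape and the percentile choices are my own modeling assumptions, nothing more.

    import math

    # Treat 5 and 100 years as roughly the 5th and 95th percentiles of a
    # lognormal over "years of technological development", with a separate
    # ~5% lump on "emulation is impossible in the relevant sense".
    lo, hi = 5.0, 100.0
    z = 1.645  # ~5th/95th percentile point of a standard normal
    mu = (math.log(lo) + math.log(hi)) / 2
    sigma = (math.log(hi) - math.log(lo)) / (2 * z)

    print(f"implied median: ~{math.exp(mu):.0f} years of development")
    print(f"implied log-space sigma: {sigma:.2f}")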
What cryonics is missing is a catchy slogan, something along the lines of “afterlife for atheists”.
That way the atheists won’t want it because it sounds like religion, and the religious won’t want it because it sounds like atheism! Genius!
You mean only people capable of making the choice for systematically rational reasons, rather than what something “sounds like”, will be saved? Fine with me, Darwin at work ;)
(not serious)
THANK YOU. Seriously.
Truth hurts!
Cryonics already pattern matches to religion: we perform a ritual on the dead so that when our leaders have created a benevolent god it can resurrect them to live extremely long, extremely happy lives.
You want to make the pattern match central in cryonics marketing?
This entire conversation reminds me of the sig quote from TvTropes/Slashdot:
Every life is precious, according to EY/HJPEV, and most people currently alive are religious, so you have to speak their language to sell the message.
Dithering even for a second because of the misplaced qualms about the way you might come across is a crime against transhumanity.
Imitating the surface features of successful memes can be a cargo cult if you don’t stop and think about why they’re effective. Can you imagine how an actual religious person would react to that slogan?
Are you being sarcastic?
I didn’t realize I had to spell it out… People here seem to be so serious about certain topics. Well, semi-snarkastic, really.
The slogan needs a Futurama reference in there somewhere.
I see cryonics as a kind of insurance policy. Fitting, since it is literally a life insurance policy that funds it. As I continue to live on and see technology improve at an exponential* rate, I can continually refine my options as needed. If I live to be 60 and by then the cost of cryonics has dropped to $10 (which is admittedly an optimistic estimate), I can take the money I’ve paid in and withdraw both it and the interest it has accrued. If instead the cost has not dropped, or has in fact increased, or even if the prospect of cryonics has disappeared entirely, I can still take the money and use it on expensive wine. :) And if I have a heart attack next week, I will have lived out my last few days with the comfort that my cryonics is funded, that the probability of my brain being preserved rather than decaying immediately is far greater than zero, and that the probability of my being revived someday is not approximately zero.
*Or in some cases, linear
What?
The previous sentence was: “Except that we can make mechanical hearts and you stay the same person on receiving one.” The first of these two statements is obviously false, and the second is false in the absence of that detailed brain-machine synch referred to elsewhere, which we don’t have.
Haven’t we had mechanical hearts since 1982?
By “mechanical brain” I was trying to indicate something much simpler than an AI.
You have maybe. I’ve had a biological heart since 1982 (in January).
“When a man’s an empty kettle, he should be on his mettle, yet I am torn apart...”
Oh it’s mechanical alright. Newtonian, really.
Are you claiming that the following sentence is obviously false?
Because it strikes me as obviously true.
It seems likely to me that he means “we can make mechanical brains” is false.
Another aspect which I have not seen discussed so far is of a quasi-philosophical nature: what makes you “you”? Yes, it is possible that my brain could have been started up last Thursday, giving me the illusion of the memories I have. But say a copy of my brain were made next Thursday. That being would have all of the memories that I have, and all my prior associations and synapses, but if he had an accident with a nailgun the next day, I wouldn’t feel any physical pain. That is because he is not me. Even if his physical circumstances are almost identical, there is no continuity between him and me. If I died before the nailgun incident, it would not be me that felt the pain.
Instead, let’s say I froze my brain. My brain that makes me me, stops working. That makes the thing that is me cease to exist. The continuity stops there. The person that you revive 200 years into the future may have the same state that I left with, but it wouldn’t be me in the sense that makes you want to preserve “yourself” via cryonics in the first place. It defeats its own purpose.
If it is your pattern, your personality, your way of life that you want to preserve, you win. Parents attempt this all the time; tribes, nations, and cultures do it too. I have replicas of myself and relatives in The Sims 3 that I identify with. The replica of me acts the same as I would in that situation, so he is in effect a copy of me. If you downloaded your brain and copied it, you could have copies that would do just as you would, but the one in the body reading this comment right now is the instantiation that you most likely care about preserving. Sure, it would be cool to have someone exactly like you living in the far future, but you’d only be the brain donor who died to create him.
The only way of preserving yourself indefinitely is to gradually transition your personality onto better hardware. Currently, it is my opinion that ethics is the only thing preventing the creation of a BCI suitable for the task.
When a wood frog freezes itself for the winter does it unfreeze as a different frog?
What? If your friends asked questions to both you and your Sims 3 replica, you think they wouldn’t be able to tell which was which? It’s clearly not anywhere close to a copy of you.
You’d have to ask the wood frog. If the wood frog were to ponder its own existence, it would be a different frog. In both cases, human and frog, it’s the same body before and after freezing. Note, frogs do freeze themselves and still function afterward, humans don’t. Humans have a lot of their energy invested in cognitive functions and they don’t act quite the same without it, and that’s what freezing demolishes.
The best answer to this would be not to reply, which would be highly amusing, but aside from that: I hope you understood that I meant that is how I would react in the Sims world. Sims don’t even need to breathe air; they don’t have lungs. Given the somewhat different conditions in Pleasant Valley, Sims have somewhat different requirements than humans. If I modified my human body to become a Sim, then I would be exactly what I designed there.
I’m not sure I even understand what claim you’re making. Just to ask a simple question: if you performed the modifications you are envisioning, would you anticipate remembering having done so? If so, does your Sims 3 replica have a corresponding memory? Does it have the capacity to access such a memory, were one created for it?
If I performed the modifications I am envisioning, I do not anticipate the end product (the Sim version of me) to remember having done so. Sims have memories, but no memories regarding events outside of the Sims 3 environment. If human-me were to specifically write in that memory, a player of TS3 would see that memory, but the Sim himself would only manifest different behaviours based on the moodlet effects attached to that memory (akin to conditioning), but would not specifically understand what had actually happened.
It’s more in the sense of, “If I were a ladybug, how would I act?” Of course, I would act exactly like any other ladybug would, but The Sims are designed to look and act much more similar to humans than ladybugs are.
My point was going to be along the lines of: “I as a human can identify with my Sim copy, given my social primate and cognitive human skills. And yes, a computer program that works exactly like me, or a brain image of me loaded onto a cylon, or a magically revived frozen body that my consciousness used to be running on, I can identify with all of those things too, but I am aware that the me from right now is not going to be in any of those. Gradual hardware upgrade is the only thing which will preserve the future-body descendants of now-me. And while near-death-me might disagree, now-me is actually okay with just replication.” I’m just surprised to see so many people who are otherwise rational turn a blind eye to this issue.
Hey, what are you reading downvoted comments for? :)
I’m a rebel that way.
Thanks for clarifying.
For my own part, if I get to choose now between a future where something exists that remembers being me but has absolutely no continuity with my current body (that is, is not a “future-body descendant of now-me”), and a future where something exists that has continuity with my current body but does not remember being me, I choose the former. (Of course, both might suck, depending on other details.)
Whether either of them is “really me” seems like a confused question.
And, sure, within the space of possible configurations of a system with many fewer degrees of variation than I have, it’s possible to select the most me-like available configuration and identify with it on those grounds, which allows me to single out a particular Sim, or a particular ladybug, as being “me”. I consider this sort of identification to be similar to how people identify with a football team or a rock band, though, and not particularly relevant to what we’re talking about when we talk about preserving individual identity in an artificial matrix.
Now we’re getting somewhere! What I am trying to say is that when we are talking about preserving individual identity in an artificial matrix, we are mistakenly identifying with the copy because it closely resembles us, it’s “Team Me”, but really it is “Me” that we want to preserve, not Team Me.
Well, I agree that what gets preserved within an artificial matrix is in an important sense “Team Me” rather than “Me”. But I would say the same thing about what gets preserved within a future-body descendant of now-me.
Whereas it sounds like you would say that what gets preserved within a future-body descendant of now-me is really “Me” rather than “Team Me”… yes? If so, what grounds do you have for believing that?
More generally, I think the concept of “Me” as distinct from various degrees of “Team Me” membership is confused and doesn’t carve reality at its joints. There’s no such thing; all there is is various degrees of “Team Me” membership.
I also think that the degree of “Team Me” membership a Sim or a ladybug is capable of is radically different from (and inferior to) the degree of “Team Me” membership a high-fidelity copy of now-me or a future-body descendant of now-me can have, such that equating the two is importantly misleading, though in a technical sense accurate.
Agh! You just killed “Me”! Thank you! It is true, the only distinction that I had really made between Me and Team-Me was classical physical continuity, that is the only place I could see to draw a line. If there is no line, and yes, I fearfully agree with you on that, then my reason for uploading or freezing (aside from survival-instinct projection) is to preserve something that runs on the same or similar programming as the rest of Team-Me. From an objective point of view, I wouldn’t really consider my pattern worth preserving. What can Team Me do that a sufficiently advanced (and most likely more efficient) AI couldn’t do better?
You sound like you’re implicitly treating that “objective” point of view as more important than your actual (presumably subjective) point of view.
Is that true?
If so, on what grounds?
To answer your question, that is not true. The objective point of view and subjective points of view are equal, because they are just different points of view. If it sounds like I consider it more important, it is only because my mind actually does agree more with the objective view. At this point in time the subjective point of view is foreign to me. I find the big picture so fascinating that I become less concerned with my own part in it, but I don’t expect that of anyone but me. I am more interested in creating something good than preserving something flawed, even if that thing happens to be me.
How is sleep, unconsciousness, deep anesthesia any different, though?
But further, why is continuity important? If intelligence can be simulated on a computer, and it seems likely that an intelligence sophisticated enough to ponder its own consciousness probably really is conscious, why would a reboot have any effect on its identity?
In any case, I don’t have any answers. Eliezer’s “Identity Isn’t In Specific Atoms” seems to me to suggest that cryonics is probably unnecessary if I can instead do a molecule-level brain-image upload before death (assuming that turns out to be possible). But if that’s so, don’t we also need to reject the idea of a personal future?
Those two questions are two sides of the same coin to me. Those examples preserve continuity in the form of synapses and other neural connections. In none of those cases does the brain actually stop running, just the consciousness program. You can’t just pull out someone’s heart while they’re anesthetized—if the brain’s cells die from lack of fuel, you’re destroying the hardware that the consciousness program needs to reboot from.
Assuming that you have programmed it to care about its own consciousness, not just to ponder it, the first boot would die, and the reboot would wake up thinking it was the first boot.
When you upload your brain-image, please make the most of your life after that, because it would be the same as with the computer: you will die in fear and loneliness, and your copy will wake up convinced he is you. (That would make a great fortune-cookie message!) In both cryonic preservation and brain upload, the original quantum system which is you is being shut down; no splitting realities are involved here (except the usual ones). You are going to experience death, and it was my understanding that the point of cryonics and mind transfer was to avoid experiencing death. (By “experience death”, I mean that your mind-pattern ceases to function.) Anyone deriving comfort from those two methods should seriously consider this concrete downside to them.
But if a consciousness can be simulated on a computer running at multiple GHz, would not a simulation on a computer running at one cycle per hour also be consciousness? And then if you removed power from the computer for the hour between each cycle, is there any reason to think that would affect the simulation?
My intuition as well. Continuity seems less of a big deal when we imagine computer hardware intelligence scenarios.
As another scenario, imagine a computer based on light waves alone; it’s hard to see how a temporary blocking of the input light wave, for example, could cause anything as substantial as the end of a conscious entity.
However, if I think too much about light waves and computers, I’m reminded of the LED cellular-automaton computationalist thought experiment and start to have nagging doubts about computer consciousness.
Perhaps I misunderstood what you meant by “reboot”. The situation you are describing now preserves continuity, therefore is not death. In the first situation, I assumed that information was being erased. Similarly, neural cellular death corrupts the entire program. If there was a way to instantly stop a human brain and restart the same brain later, that would not be death, but freezing yourself now does not accomplish that, nor does copying a brain.
(Unimportant note: it wasn’t I who brought up reboots.)
Anyway, I believe that’s why cryonics advocates believe it works. Their argument is that all the relevant information is stored in the synapses, etc., and that this information is preserved with sufficient fidelity during vitrification. I’m not sure about the current state of cryoprotectants, but a good enough antifreeze ought even to be able to vitrify neurons without ‘killing’ them, meaning they could be restarted after thawing. In any case, cellular death should not “corrupt the entire program”: as long as no important information is lost, we can repair it all.
I’m much less confident about the idea of uploading one’s mind into a computer as a way of survival since that involves all sorts of confusing stuff like copies and causality.
How do you know that?