Cryonics: Can I Take Door No. 3?
If you don’t believe in an afterlife, then it seems you currently have two choices: cryonics or permanent death. Now, I don’t believe that cryonics is pseudoscience, but the odds are still pretty poor (Robin Hanson uses an estimate of 5% here). Unfortunately, the alternative offers a chance of zero. I see five main concerns with current cryonic technology:
There is no proven revival technology, thus no estimate of costs
Potential damage done during vitrification which must be overcome
Because it cannot be legally done before death, potential decay between legal death and vitrification
Requires active maintenance at very low temperature
No guarantee that future societies will be willing to revive
So I wonder if we can do better.
I recall reading of juvenile forms of amphibians in desert environments that could survive decades of drought in a dormant form, reviving when water returned. One specimen had sat on a shelf in a research office for over a century (in Arizona, if I recall correctly) and was successfully revived. Note: no particular efforts were made to maintain this specimen; the dry local climate was sufficient. It was suggested at the time that this could provide an alternative method of preserving organs. Now the advantages of this approach (which I refer to flippantly as “dryonics”) are:
Proven, inexpensive revival technology
Apparently the process does not cause damage itself
Proven revival technique may overcome legal obstacles of applying before legal death
Requires passive maintenance at low humidity (deserts would be ideal)
Presumably lower cost makes future revival more likely (still no guarantee, but that is a post in itself)
There is one big disadvantage of this approach, of course: no one knows how to do it (it’s not entirely clear how the juvenile amphibians do it) or even if it would be possible in larger, more complex organisms. And, so far as I know, no one is working on it. But it would seem to offer a much better prospect than our current options, so I would suggest it worth investigating.
I am not a biologist, and I’m not sure where one would start developing such a technology. I frankly admit that I am sharing this in the hope that someone who does have an idea will run with it. If anyone knows of any work on these lines, or has an idea how to proceed, please send a comment or email. Or even if you have another alternative. Because right now, I don’t consider our prospects good.
[Note: I am going on memory in this post; I really wish I could provide references, but there does not seem much activity along these lines that I can find. I’m not even sure what to call it: mummification? Probably too scary. Dehydration? Anyway feel free to add suggestions or link references.]
Unfortunately, inserting complex novel gene sequences into every cell of an organism in a way that doesn’t just cause massive, global cancer is a very hard problem. Making those sequences do what you want them to do, and not, say, kill the target organism is even harder. Especially since human anatomy isn’t well suited to the task, and would need to be modified. By the time we have the technology to do something like that, death is probably already a solved problem.
That said, I’ve used the premise in a science fiction book before. The main characters were members of Homo Sapiens Durabilis, and had genomes modified with tardigrade genetics. They could be pumped full of hydrogen sulfide, and reversibly dehydrated to death for long-term space travel, or during a medical emergency.
By the way, what was the name of the book?
“Morse Code”. But it wasn’t working thematically, and I abandoned the project. I’ve written a few other stories in the same universe.
Why is global cancer the primary risk rather than not being viable at all?
I’m most familiar with gene therapy issues causing cancer, but that might be an availability bias—in those studies where gene therapy simply kills the relevant cells, I’m sure very little is published.
The traditional way of inserting a gene into the genome is to use a retrovirus with its DNA replaced. Most such viruses (at least, that have been used) integrate randomly, meaning that there is a small but nonzero chance every time a new cell is modified that it will knock out a gene that is important for controlling cancer. On a cellular level, the most likely outcome of this is cell death, as the rest of the cell’s anticancer mechanisms shut down the cell. But of course, this doesn’t work every time.
There are site-specific viruses (i.e. ones that always integrate at the same, safe genomic location) currently being developed, and it’s hoped that these will solve the problem.
However, there’s actually another related problem. If you want to make major changes to the cell (like reprogramming it into a stem cell), the cell’s anticancer mechanisms will detect that as well, so in order to make those changes you have to at least temporarily shut off some of those mechanisms. So there is a risk for cancer in that as well.
About the topic of this thread—generally, the ability to survive specific extreme environments (especially one that affects everything in the cell such as changes in water content or temperature) is a specialized adaptation. I would not be surprised if there are global differences in the genomes of these species, e.g. most proteins are much more hydrophilic, or there is a system of specialized chaperones (=proteins that refold other proteins or help prevent them from misfolding) plus the adaptations in proteins that allow the chaperones to act on them, and further systems to repair damage the chaperones don’t prevent. It is unlikely that only a few genes would be involved, and unless a case can be made for evolutionary conservation of the adapted genes to humans, we wouldn’t have most of them (in fact, any genome-wide changes would mean that we would have to adapt our own proteins in new ways, just because we don’t share all of them with the species in question). Cold temperature is actually a special case here, because it slows down everything and thus reduces the amount of “equivalent normal-temperature time” that has passed. It’s still difficult (and of course none of these are impossible), but I don’t think it’s likely that small-scale gene therapy would be sufficient.
Cancer sounds like not-viable to me :P
I meant not even being viable enough to get cancer.
Would it require gene therapy? Could there not be a more direct method of intervention to achieve the result?
The physical structure of the cells has to change. You also don’t see this sort of behavior in large organisms, so there may be serious engineering challenges with the dehydration mechanisms in large animals. You’re essentially going to need powerful, global, highly specific gene therapy at the bare minimum. It might not be possible without engineering a new organism from scratch.
That’s a fair question. I was assuming that creatures which can survive full dehydration are so different at the cellular level that nothing less than genetic redesign would do the job, but I’m guessing.
People die as the result of very moderate dehydration, so considerable change of some sort would be required.
It’s plausible that if dehydration and revival are possible for people, then the methods wouldn’t be much like what’s evolved—people don’t fly the same way birds do.
I think part of the problem, too, is that animals who can survive full dehydration, or being thoroughly frozen within and without, are small. We don’t often realize just how big humans are, even for land-dwelling tetrapods. We’re very large, very active, and very resource-intensive—certainly there are bigger land animals about today and in our recent past, and very much bigger ones in the fossil record (brachiosaurs, anyone?), but even then we still qualify as megafauna.
The consequences of that size, especially in light of our activity level, are significant. Human physiology is well adapted to dissipating heat (and our water intake is a big part of that), yet we still routinely have trouble doing it fast enough to avoid ill effects, forcing us to adapt culturally and individually to the problem. We have to hold certain quantities (body temperature, water content) within fairly narrow ranges; our physiology is critically dependent on them.
So, yeah—if people can be put in suspended animation of some sort (regardless of mechanism), it’s gonna have to take our particular case into account. You can flash-freeze a mouse, thaw it, and get biological activity after (they don’t exactly go on to live long and prosperous mousey lives, but they do come out the other side for a bit). A mouse is tiny; you can’t extend that to a human without different physics becoming relevant. You can dehydrate a tardigrade quickly (just let it do its thing in a low-moisture environment for long enough until it loses enough water) and then leave it sitting until it gets doused again; you can’t do that to a human, because we have a lot of water to lose, our bodies fight to hang on to it, our health declines rapidly as we lose even modest amounts, and we proceed straight to death once quantities are insufficient.
I didn’t find any amphibians which survived complete dehydration, but I found an insect.
The useful word is anhydrobiosis—but no amphibians are mentioned.
Also, http://en.wikipedia.org/wiki/Tardigrade—who survive vacuum of space.
I bow to your superior Google Fu. It may have been invertebrates rather than amphibians; as I said, I was going from memory. (I can already improve the post!)
I had really hoped to promote discussion on the concept for human preservation. I had looked through the cryonics links and hadn’t noticed any discussion around this concept. In fact, I have never seen it suggested as an alternative, but thought this community would be a great place to kick it around. Thanks for your response.
Dehydration seems like a cool idea in the abstract, but I don’t know nearly enough biology to say whether there’s any way to get from here to there.
Believing in afterlife doesn’t grant you one more option. This is a statement about ways of mitigating or avoiding death, and beliefs are not part of that subject matter. An improved version of the statement would say, “If there is no afterlife, then...”. In this form, it’s easier to notice that since it’s known with great certainty that there is no afterlife, the hypothetical isn’t worth mentioning.
I’m convinced that the probability of experiencing any kind of afterlife in this particular universe is extremely small. However, some versions of us are probably now living in simulations, and it is not inconceivable that some portion of them will be allowed to live “outside” their simulations after their “deaths”. Since one cannot feel one’s own nonexistence, I totally expect to experience “afterlife” some day.
The word “expectation” refers to probability. When probability is low, as in tossing a coin 1000 times and getting “heads” each time, we say that the event is “not expected”, even though it’s possible. Similarly, afterlife is strictly speaking possible, but it’s not expected in the sense that it only holds insignificant probability. With its low probability, it doesn’t significantly contribute to expected utility, so for decision making purposes it’s an irrelevant hypothetical.
Well, this sounds right, but seems to indicate some problem with decision theory. If a cat has to endure 10 rounds of Schrödinger’s experiments with 1⁄2 probability of death in each round, there should be some sane way for the cat to express its honest expectation to observe itself alive in the end.
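To make the numbers concrete (a quick back-of-the-envelope of my own, not part of the parent comment): from the outside, the cat’s chance of surviving ten independent rounds is

P(\text{survive all 10 rounds}) = (1/2)^{10} = 1/1024 \approx 0.1\%,

while conditional on the cat finding itself alive at the end, the updated probability is trivially 1. The puzzle is which of these two numbers should drive decisions taken before round one.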
This kind of expectation is useful for planning actions that the surviving agent would perform, and indeed if the survival takes place, the updated probability (given the additional information that the agent did survive) of that hypothetical would no longer be low. But it’s not useful for planning actions in the context where the probability of survival is still too low to matter. Furthermore, if the probability of survival is extremely low, even planning actions for that eventuality or considering most related questions is an incorrect use of one’s time. So if we are discussing a decision that takes place before a significant risk, the sense of expectation that refers to the hypothetical of survival is misleading.
See also this post: Preference For (Many) Future Worlds.
I just want to throw this in here because it seems a good place: to me it seems that you would want yourself to reason as if only worlds where you survive count, but others would want you to reason as if every world where they survive counts, so the game-theoretic expected outcome is the one where you care about worlds in proportion to people in them with whom you might end up wanting to interact. I think this matches our intuitions reasonably well.
Except for the doomsday device part, but I think evolution can be excused for not adequately preparing us for that one.
PS: there is a wonderfully pithy way of stating quantum immortality in LW terms: “You don’t believe in Quantum Immortality? But after your survival becomes increasingly unlikely, all valid future versions of you will come to believe in it. And as we all know, if you know you will be convinced by something, you might as well believe it now...”
The primary purpose of decision theory is to determine good decisions, which is what I meant to refer to by saying “for decision making purposes”. I don’t see how “expressing honest expectation” in the sense of your example would contribute to the choice of decisions. More generally, this sense of “expectation” doesn’t seem good for anything except for creating a mistaken impression that certain incredibly improbable hypotheticals matter somehow.
See also: Preference For (Many) Future Worlds.
I think you may be treating your continuation as a binary affair (you either exist or don’t exist, you either experience or don’t experience) as if “you” (your mind) were an ontologically simple entity.
Let’s say that in the vast majority of universes you “die” from an external perspective. This means that from an internal perspective, in the vast majority of universe you’ll experience the degradation of your mental circuitry—whether said degradation lasts ten years or one millisecond, you will experience said degradation up to the point you will no longer be able to experience anything.
So let’s say that at some point your mind is at a state where you’re still sensing experiences, but don’t form new memories, nor hold any old memories; and because you don’t even have much of a short-term memory, your thinking doesn’t get more complicated than “Fuzzy warmth. Nice” or perhaps “Pain. Hurts!”.
At this point, this experience is all you effectively are—it’s not as if this circuitry will be metaphysically connected to a single specific set of memories, or a single specific personality.
Perhaps at this point you can argue that you totally expect this mental pattern to be reattached to some set of memories or some personality outside the Matrix. And therefore it will experience an afterlife—in a sense. But not necessarily an afterlife with memories or personality that have anything to do with your present memories or personality, right?
Quantum Immortality doesn’t exist. At best one can hope for Quantum Reincarnation—and even that requires certain unverified assumptions...
There should be some universes in which the simulators will perform a controlled procedure specifically designed for saving me. This includes going to all the trouble of reattaching what’s left of me to all my best parts and memories retrieved from an adequate backup.
Of course, it is possible that the simulators will attach some completely arbitrary memories to my poor degraded personality. This nonsensical act will surely happen in some universes, but I do not expect to perceive myself as existing in these cases.
It seems you are right that gradual degradation is a serious problem with QI-based survival in non-simulated universes (unless we move to a more reliable substrate, with backups and all).
True. Believing doesn’t grant more options, but if you truly believe in an afterlife, then this is not a question that would concern you: you believe you have a better option. :)
If you believe in an afterlife, the question that concerns you is still whether there is an afterlife, not whether you believe in an afterlife. So you still should worry about the hypothetical of there being an afterlife, which you’d assign more probability, not about the hypothetical of you believing in an afterlife.
I think we are assigning different meanings to “believe”. In my sense, a true believer has no doubt, so “whether” is no longer a question. I think we may be getting sidetracked on semantics, though.
The overwhelming majority of the human population disagrees with you. Yes, rationally, we know with great certainty there is no afterlife (well, at least with almost the same certainty with which we know there is no Flying Spaghetti Monster; the probability in each case is only infinitesimally smaller than 1).
But we choose to accept unverified statements from our elders regarding the afterlife rather than dwell on the facts of death and fail to procreate.
The jargon term is “cryptobiosis”, and revival is “anabiosis”.
The main problem with dehydration as I understand it is similar to that of cryopreservation, but worse: dehydration causes cells to shrink which damages organs. It also concentrates cellular components (salts, proteins, etc.) to the point where they start interacting with each other harmfully.
That said, it’s an interesting starting point. Mike Darwin has proposed replacing cellular water with some kind of solvent carrying monomers that form a hard polymer under controlled conditions, possibly similar to Amber. Once it polymerizes and forms a glass, the cell’s components would be unable to interact with each other. (The organism would be cooled to −20C using M22 for cryoprotection beforehand to minimize metabolic damage.)
The hard thing is getting a high concentration of anything into cells without rupturing them. Organisms like tardigrades that achieve cryptobiosis (= anhydrobiosis) manufacture their own protective sugars such as trehalose. Plants do the same with sucrose. Cells have a special transport protein that yanks glucose (the most common sugar monomer) inside through the lipid membrane very quickly relative to its natural diffusion rate. Note that this is useful for cryoprotection, not just against dehydration.
Lately I’ve been wondering if Foldit could be used to design proteins that pull other things into cells faster. Could such an enzyme be programmed to embed itself in the cell membrane? Perhaps something more like a virus could do this. Or perhaps a custom protein could turn glucose into a more suitable polymer under the right conditions.
Chemical fixation (sometimes called “plastination”, although this conflates the practice with an unrelated procedure) is an in-progress technology to preserve brains at room temperature, and is being evaluated alongside cryonics by the Brain Preservation Foundation: http://www.brainpreservation.org/
It would probably be cheaper than cryonics, and would require much less long-term support—you can throw the brain in a shoebox instead of constantly maintaining it in liquid nitrogen. It still lacks a revival mechanism though—the current hope seems to be preserving enough information to get it back via slicing and scanning later.
Alcor’s magazine Cryonics just published my article titled “Cryonics and the Singularity.” It’s on page 21 of this:
http://www.alcor.org/cryonics/Cryonics2012-4.pdf
The article argues that if you believe in the likelihood of a coming singularity you should sign up for cryonics.
Uh, you skipped a step. The bottleneck is trait-selection/gene therapy more than it is knowing where the gene loci are. We know the signatures of Huntington’s and some other genetic diseases, but that hasn’t led to the ability to cure them. Right now, we can only negatively select through abortion, so that wouldn’t create the geniuses you’re looking for.
See my article “A Thousand Chinese Einsteins Every Year” for a more detailed explanation. I’ve learned a lot since writing this article (in 2007) and my latest views on the potential of eugenics are fully spelled out in my book Singularity Rising that will be released in a few weeks.
Another possible hard part: if world-shaking genius (not just being unusually smart) is the result of having the sort of mind which fits a solvable hard problem, then how would anyone know what traits to amplify and what education is needed?
Don’t underestimate the power of g:
http://www.udel.edu/educ/gottfredson/reprints/2002ghighlygeneral.pdf
Linda Gottfredson’s papers in general reward study:
http://www.udel.edu/educ/gottfredson/reprints/
Upvoted for clearly setting out a line of reasoning.
Reasons to downvote: a vague logical jump based on surface phenomena; organising work rather than executing it.
Reasons to upvote: acknowledging ignorance; stating the stance clearly, explicitly and as your own.
Thanks for the clear feedback. I can see that posting to this forum is going to be a humbling, if valuable, experience :). Any thoughts for improvement?
As many of the comments have pointed out, the point raised is not the only viewpoint. Running with the new idea from different angles could have produced fruitful thoughts that could have been worked into the post.
Cryonics has its details worked out while “hydronics” doesn’t. Thus it’s somewhat likely that you are comparing the weak points of cryonics to the good points of dryonics. Hunting for a better method is all good, but it can make the comparison accidentally look better than it would after a closer investigation. The cryonics side of the comparison is fixed, while the new-method side works with just what is apparent.
Say that I think of methods to move in space besides rockets. I might think of dropping nuclear bombs behind the craft to improve the energy extracted per unit of mass used. This might all be nice while only thinking about pushing a craft forward. However, if I stop to think about other implications the situation doesn’t seem too rosy: there might be radioactive products left behind, there can be significant forces on other nearby vessels or habitats, and it would be trivial to weaponize. These disadvantages might be overcome with some design work, but it’s far from a “go faster” kind of magic button. And I don’t need high technical ability to realise that those sorts of drawbacks are possible.
Dryonics likely needs some support from cell chemistry. Changing the cell chemistry of an already-living human could be somewhat messy. And even if it were adjustable, it is somewhat likely that human cells do interesting things that conflict with such “design constraints”. How much immune-system efficiency, alcohol tolerance or metabolic speed would be an acceptable price to pay for the advantage? Even if successfully dried people would require less energy for upkeep, protecting them from erosion might bring the cost closer to high-tech upkeep. At room temperature the surrounding bacteria can be active. Would dried bodies be vulnerable to winds, sounds or earthquakes?
If we only want methods that work in principle regardless of details, you can always plan a round trip to the stars and use the twin paradox to put yourself in the hands of future doctors. The question is only whether the details of time dilation, cryonics or dryonics are doable. Thus skipping, or being ignorant of, the details doesn’t help that much. Finding a new preservation mechanism mainly extends the frontier where concrete progress can be made. So before long you have to dig deeper anyway. And doing today what you could do tomorrow ensures you don’t get stuck in the past.
I certainly didn’t intend to imply that this was the only viewpoint, or even that it was necessarily better, only that it addressed some of the issues with what seemed to be the only current possibility. I agree that it would require considerable research into how to achieve it: my point is that these would be upfront costs, whereas cryonics has backloaded costs (technological as well as financial). I also did not mean that a “hydronically” preserved organism (I like your term) could be stored anywhere, simply that it is easier to establish passive storage. Egyptian mummies lasted thousands of years in their dry, desert tombs, but can decay rapidly when exposed to moister climes. Bacteria need warmth and water to be active: removing one or the other is sufficient. We already preserve food at room temperature using the same principle (salt or sugar both preserve food by dehydrating bacteria).
The fact is, we do not currently have a reliable means of arresting a human’s metabolic processes (including post-mortem decay) and restoring them. We don’t have the details for restoring cryonically preserved persons. “Advanced nanotech” is just a mysterious answer until we know how to do it. The intention of the post was to stimulate thought (which I think it has done). I do not believe I have to have all the answers before I can ask the questions. New ideas arise from making new connections between existing concepts, and sometimes this means concepts existing in two different minds.
Personally, I’d rather just go on existing here and now. Preservation is just a backup option, much like backing up your computer files: you’d rather not have a system crash, but if you do, you can recover. On the other hand, cryonics is our only current “backup” option, so the choice is a “no-brainer”. Even a slim chance is preferable to no chance.
Agreed, but I don’t know where to begin digging. Which is why I threw this open to the forum.
I’m not sure I understand what you mean by that: don’t put off what you can do today?
Taking small, firm steps one at a time is easy to support. Taking only a single step because you don’t know how to take more is very probably under-applying one’s knowledge. If the reasoning can continue from a basically empty reply from the other party, it’s likely that thought was suppressed very early. If one strives to take things to their logical conclusion, this is a bad thing.
In case it’s not clear, do understand that the post was supportable. I was just pointing out ways it could have been better. I could have communicated more clearly what kinds of sharper thinking could have gone into writing this post, or at least not distracted attention from the thinking options available (by needlessly lengthening this with on-topic content). Instead of just settling for the first step, one could say to oneself: “I need to go deeper” (cue Inception music). And you probably want to do that in the first place, instead of waiting around for a demanding reason to do it.
I have just recently started voting on what I read and explicitly stating my reason for each decision. Not all people want to have every detail rubbed in their face. When asked, I can elaborate. I might not be adept enough at rationality forums to offer a detailed analysis of what went wrong, or to help ensure such shortcomings don’t happen in the future. Because of the known tendency of people not to cast themselves as villains in their own story, as a precaution I will also mention that this is likely to be a newbie-newbie interaction, as discussed in the “Eternal September” threads.
But I do vote and say why I vote, and I hope that this is more valuable than my explanations being misleading or confusing is detrimental. I don’t know; I am experimenting to see whether it works. It could easily be that the long explanation is just noise, with the signal being in those word- or phrase-like descriptions.
I appreciate the feedback, and the more detailed the better. I am always looking to improve my own effectiveness, especially in communication. One of my most frustrating, and unfortunately all too common, experiences is thinking something through, coming up with what turns out to be the correct answer, and being unable to convince others. (I am not suggesting that I have the right answer in this case; in fact, the odds are that I don’t.) To me, the more specific the feedback, the better. So, for example, dissecting the post, saying “this is good”, “this could use more support”, “this does not follow”, etc., is extremely helpful (to me, anyway).
As a measure of the value of your feedback, I have upvoted your responses, because I do find them useful. So I hope that provides some good feedback for your own experimenting :)
Why this proposal is a bad one:
Cryonics is based upon a working technology, cryogenic freezing of living tissues.
The latest cryonics techniques use M22, an ice crystal growth inhibitor that has been used to preserve small organs and successfully thaw them. More than likely, if you were to rewarm some of the tissues from a cryonics patient frozen today, some of the original cells would still be alive and viable. I don’t know if this particular experiment has been performed, however: there is a reason why cryonics has a bad reputation for pseudoscience.
If you dehydrate a mammalian cell and then add water again, it’s still dead. If you freeze and rewarm it, heating and cooling at a rapid enough rate to prevent ice crystal growth, not only is the cell alive, but it can be more viable than fresher cells obtained later. Cryogenically frozen sperm or ova from a young person can be more viable than the same material obtained from the same person later in life.
There are further improvements to cryonics that have not been made because it lacks the funding and resources it deserves.
Better cryoprotectants are more than likely possible. Better techniques are almost certainly achievable. The method used to preserve a viable rabbit kidney used extremely rapid cooling. Cooling the brain more rapidly might yield better results. There are potentially revolutionary improvements possible.
A Japanese company claims that oscillating magnetic fields prevent orderly crystal growth in water. They report experimental results and success in preserving human teeth this way. If this method is viable, cryonics could use very large magnets on the human brain and potentially get perfect preservation with demonstrable proof of viability. http://www.teethbank.jp/ http://singularityhub.com/2011/01/23/food-freezing-technology-preserves-human-teeth-organs-next/
The first source, I think, is the better one: as far as a Google search will tell me, this is the only existing human tooth bank in the world. If the teeth weren’t viable, it seems unlikely that credible dentists would be attempting the transplants and succeeding. (I think the technology actually being used is a much better indication of legitimacy than papers or Singularity Hub articles.)
Depends on what you mean by “working”. When we successfully freeze and revive a mammal, I will concede the point. And it’s still our best backup option (to not dying). Cryonics has a head start on other possible techniques, because it was the first conceived and there are people working on it. That doesn’t mean it’s the best or only possibility.
My proposal was for further research, not to start doing it. I admitted we don’t know how to achieve a non-hydrated state capable of recovery, or even if it can be achieved. And this was certainly not intended to be an attack on the work being done on cryonics, just a suggestion that there may be other ways. Speaking of which: DARPA seems to be working on yet another approach. I think as a society we have sufficient resources to pursue various options. I have no horse in this race, I just want to see the finish! :)
Cite please?
Physical and biological aspects of renal vitrification.
Cryopreservation of organs by vitrification: perspectives and recent advances (PDF).
EDIT: I should clarify, the kidney was cooled with liquid nitrogen vapor and the lowest temperature it was exposed to was still fifty degrees above that of Liquid Nitrogen. This is important because LN2 temperature is far below the vitrification point of M22, and cooling even a little below T_g causes fracturing.
Yes, but it doesn’t fracture everywhere. Hence, if you rewarmed a tissue that was cryogenically frozen, some cells would probably still be viable. Hence, my hypothesis that if you took samples from a current patient where things were done right, some of the cells would still be alive.
A related article : http://www.nature.com/ncomms/journal/v3/n6/full/ncomms1890.html?WT.mc_id=FBK_NCOMMS
What about a fracture that severs the brain in several pieces?
There are fractures like that in existing patients. Note that my hypothesis is that some of the cells would still be viable. I did not say any neurons were viable. I’m merely saying that cryonics is provably better than dehydration or plastination because of this viability factor.
Despite this, IF patients frozen using current techniques can ever be revived, the techniques used will more than likely require a destructive scan of their brains, followed by loading into some kind of hardware or software emulator.
Trying to think of what this might subjectively be like is hard to view rationally. I don’t know if a good emulation or replica is the same person or not : you can make solid arguments either way.
Extremely advanced, better versions of cryonics might eventually reach the point of actually preserving the brain in a manner where reheating brings it back to life and a transplant is possible. However, a destructive scan and upload might still remain the safer choice.
Regardless of how the revivals were actually done in practice, if reproducible and public demonstrations of viability were ever performed, I would expect that cryonics would gain widespread prevalence, mainstream acceptance, and become a standard medical procedure.
An afterlife doesn’t really solve the problems people want it to solve. For one thing, ghost hunters with cable reality series might bother you with inane requests like pushing buttons on flashlights. ; )
But more to the point, why do people assume that an “afterlife,” if it exists, has to last forever, or that you have to have one to give this life “meaning”? This shows uncritical, self-centered teleological thinking about human existence.
Ha! I love this. My wife is always watching those shows, and I find their assumptions rather inane: I can’t immediately explain this, so it must be paranormal.
False dichotomy: cryonics may fail (actually, will probably fail) to revive you. Or it may succeed, and then you die anyway.
That seems like quite an optimistic estimate. Successful revival depends conjunctively on a large number of events, many of which are highly speculative (no damage from preservation, super-duper nanotech) or outright implausible (cryo orgs not succumbing to organizational failure).
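To make the conjunctive point concrete (the numbers below are purely illustrative, my own, and not estimates anyone in this thread has made): even when each link in the chain looks individually generous, the product is small,

P(\text{revival}) \approx P(\text{preservation good enough}) \times P(\text{org survives}) \times P(\text{revival tech developed}) \times P(\text{society bothers}) \approx 0.6 \times 0.4 \times 0.4 \times 0.7 \approx 0.07,

and adding a few more merely “likely” conditions quickly pushes the product below Hanson’s 5%.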
MNT isn’t strictly necessary. Anabolocytes and other speculative genetically engineered cells could do the job. They are a little more likely than Freitas’ nanomedicine because, well, cells exist, which is not an argument that works for MNT.
There’s also whole-brain emulation, which doesn’t require nanotech to function—just slightly better scanners, substantially better neuroscience, and exponentially better computers.
We have plenty of models of neurons and some of them imitate neurons very well.
Eugene Izhikevich simulated an entire human brain equivalent with his model and he saw some pretty interesting emergent behaviour (Granted, the anatomy had to be generated randomly at every iteration, so we still need better computers).
That’s true, but we need to get it really, really close. Even relatively small statistical deviations from the behavior of the real neurons are probably intolerable. Besides, real neurons are not interchangeable: they have unique statistical biases and are influenced by a variety of factors not modeled by modern simulations, like neurotransmitter diffusion, glial activity, and subtle quirks of specific dendrites and axons.
Right now, even if you gave us a high-speed brain scanner, a high-speed computer, and an unlimited budget, we wouldn’t have the capability to interpret the image data the scanner produced, or even be quite sure which immunostains to use for the optical imaging to pin down the required details. I expect it to take at least five to ten years for us to get the theoretical details ironed out.
It requires substantially better scanners, and a fixation process that preserves all the relevant features.
Vitrification seems to work pretty well, in terms of preserving relevant details. Observing some of those features is going to require an as-yet-not-fully-understood immunostaining process, but that’s under neuroscience. As far as the scanners go, the resolution is already adequate or near-adequate for most SEM technologies. It’s just a question of adding more beams and developing more automated methods, so the scanning can be more parallel.
Do you have any reference?
According to PZ Myers you can only do that with exceptionally small samples of tissue.
PZ Myers has unreasonably high standards for ‘relevant details.’ Demanding one millisecond total fixation time (with every atom being in precisely the same position as it was during life) is totally ridiculous. If you want to study intra-neuron cell biology, sure, you need that, but for brain emulation, all you care about is the connectivity of the network and the long-term statistical biases of particular neurons’ synaptic connections (plus glial traits, naturally), which is (probably) visible from features many orders of magnitude more durable than the kinds of data he’s talking about. Also, his comments about accelerating the speed of the network are kind of bizarrely ignorant, given how smart a guy he clearly is.
The only way the issues he mentions are problematic is if high-detail inter-neuron computing turns out to be necessary AND long-term state dependent, which the evidence suggests against (the blue brain project has produced realistic synchronized firing activity in a simulated neocortical column using relatively simple neuron models).
As far as a reference goes, there’s this study, in which they took a rat’s brain, vitrified it, and examined it at fine detail, demonstrating “good to excellent” preservation of gross cellular anatomy.
Well, he’s a developmental biologist specializing in the vertebrate nervous system.
One millisecond fixation time might be an excessive requirement, but in order to perform an emulation accurate enough to preserve the self, you will probably need much more detail than the network topology and some statistics. Synapses have fine surface features that may well be relevant, and neurons may have relevant internal state stored as DNA methylation patterns, concentrations of various chemicals, maybe even protein folding states. Some of these features are probably difficult to preserve and possibly difficult to scan.
EDIT:
Actually they vitrified 475 micrometre slices of the hippocampus of rat brains. It’s no mystery that small samples can be vitrified without using toxic concentrations of cryoprotectants.
Moreover, the paper says: “Finally, all slices were transferred to the two wells of an Oslo-type recording chamber [ … ] and incubated with aCSF at 34–37 C for at least 1 h before being used in experiments.”
“Following initial incubation for 60 min or more at 35 C in aCSF to allow recovery from the shock of slice preparation, [ … ]”
I’m not a biologist so I might be missing something, but my understanding is that this means that somehow ischemia is not an issue here, while it certainly is when dealing with a whole brain.
The surface details we can read with SEM, and we can observe chemical/protein concentrations through immunostaining and sub-wavelength optical microscopy (SEM and SWOM hybrid is my bet for the technology we wind up using). I don’t think there’s strong evidence for DNA methylation or protein state being used for long-term data storage. If evidence arises, we’ll re-evaluate then. But modern neuron models don’t account for those, and, again, function realistically, so they’re not critical for the computation. The details we’re reading likely wouldn’t have to be simulated outright—they would just alter the shape of the probability distribution your simulation is sampling from. A lot of the fine stuff is so noisy, it isn’t practical to store data in it. The stuff we know is involved we can definitely preserve. As a general rule, if the data is lost within minutes of death, it’s probably also lost during the average workday.
I honestly don’t think cryoprotectant damage is anywhere near the big problem here. I’m sure it does cellular damage, but it seems to leave cell morphology essentially intact, and isn’t reactive enough to really screw up most of the things we know we have to care about, in terms of cell state. Ischemia is a bigger problem, and one of my points of skepticism about non-standby cryonics. Four plus hours at room temperature simply seems too long. That said, as our understanding of cell death improves, we’re starting to notice that most brain death seems to be failure of the cells’ oxygen metabolisms, not failure of synaptic linkings. I’d like to see studies done on exactly how long it takes relevant neural details to begin to break down at room temperature. That said, flatlining cases suggest that there’s some reason to hope for the time being. I’d like to see the science done, in any case.
What are these?
I never heard of them and Google doesn’t yield meaningful results.
A special type of teacher’s password.
Eudoxia is referencing Mike Darwin’s idea of modifying white blood cells with arbitrarily-sophisticated biotechnology (we’re talking “you can design new organelles to spec” as a lower-level requirement) to do active cell repair, sucking up cell contents and yoinking nuclear genetic information from even very-damaged cells before digesting the old contents and replacing them. It’s an elaborate thought experiment with technical-looking diagrams that elides huge black boxes in its proposed mechanism. Basically it’s the idea of nanomedicine before the term was coined.
The original dichotomy is correct if you think about the consequences of cryonic success.
If, and only if, cryonics succeeds, the world will have developed the technology to restore you from a cracked, solid mass of brain tissue. (The liquid nitrogen will fracture your brain because it cools it below the glass transition point.)
Also, as sort of a secondary thing, it has figured out a way to give you a new body or a quality substitute. (it’s secondary because growing a new body is technically possible, if unethical, today)
Anyway, this technical capacity means that almost certainly the technology exists to make backup copies of you. If this is possible, it would also be possible to keep you alive for billions of years, or some multiple of your original lifespan so huge that it could be approximated as infinite.
You might consider these technical capabilities to be absurd, and lower that 5% chance to some vanishingly small number, like many cryonics skeptics do. However, one conclusion naturally follows from the other.
We don’t know how to reliably clone a human being, and we definitely don’t know how to transplant your brain into it or attach your head to it.
We’ve done body transplants in primates in the past. Hooking up the nerves is still tricky, but we could probably figure it out. Also, cloning one mammal is basically like cloning another. There’s really no doubt we could clone a human being if we really wanted to. The trick is that current cloning mechanisms have a very high failure rate, and nobody wants to deal with the pile of dead babies and fetuses that would come out of such a process.
Realistically, though, 3d tissue printing is probably the way to go. We can already do several organs that way, and resolution is essentially the only limit to being able to fabricate most of the rest.
One team did one head transplant with one monkey in the 1960s (it is said to have survived a day and a half). Reattaching a completely severed spinal cord is still impossible, not “tricky”—all attempts at head transplants have produced quadriplegics.
Wouldn’t this be tantamount to regrowing a transected spine? I’m not up-to-date on that area, but I don’t think we can do that yet.
We can and we can’t. Here’s an 11-year-old article where rats successfully regained function: http://www.jneurosci.org/content/21/23/9334.abstract
That’s just an example. I think that if society were far more tolerant of risks, and there was more funding, and the teams working on the problem were organized and led properly, then human patient successes would be seen in the near future.
Isn’t that the funny thing? We’ll take a certain loss over a risk of the same exact loss. Sigh.
Isn’t it closer to “take a certain loss over a risk of the same exact loss, plus a whole lot of money”?
Yes, that is part of it. I don’t think that the flat financial loss is the killer issue in many cases where an unproven method could work, or not. When doing nothing is acceptable, trying something becomes fraught with the risk of being blamed for the failure.
That’s a Pascal’s wager argument.
What? No. Pascal’s wager is when you apply the rules of instrumental rationality to epistemic rationality.
Simply being willing to take risks to possibly get a better outcome, without warping your beliefs, is not the same thing at all.
“Pascal’s wager” denotes several different fallacies, which are present in Pascal’s original argument.
Instrumentally, it refers to estimating expected utility based only on a possible outcome with an extremely large (positive or negative) payoff, without taking into account the fact that said outcome has an extremely small probability.
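In symbols (my gloss, not the commenter’s): the full expected utility of taking the wager is

EU(\text{wager}) = p \cdot V + (1 - p) \cdot v,

where V is the enormous payoff that arrives with tiny probability p, and v is the ordinary payoff otherwise. The fallacy is reasoning as if EU(\text{wager}) \approx V, letting the size of V do all the work while ignoring how small p is.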
This is not quite right. The justification is that an action leading to certain negative consequences is not equivalent to inaction leading to the same consequences. Inaction is almost always acceptable, morally and legally. There are many obvious and non-obvious pitfalls in changing this attitude.
True when comparing an action with a non-conjugate declining-to-act (e.g. throwing someone off a building vs. not saving someone from falling off a building).
In this case, we’re looking at a fear of ineffectiveness—the case where acting could produce the same effect as not doing that exact same thing.
And yet, from a consequentialist standpoint, there shouldn’t be. Regardless of potential pitfalls, this is unlikely to change: I suspect it’s “hardwired” into our psychology. But there is also a reverse tendency, especially on the part of the public attitude towards leaders, where it is better to be seen to be doing something rather than nothing. Even if it is not clear what action should be taken.
Only if your reasoning is extremely reliable in estimating the consequences of your action or inaction. Otherwise you may end up doing more harm by acting than you would by not acting (happens all the time). I am guessing that this is part of what keeps people from acting.
I meant in the future. I think severe spinal cord damage is still a little beyond us right now. Though with the progress we’re making with stem cells, I’d guess we’re likely to take some steps on that front in the near-ish future.
Perhaps, but I don’t think it’s so easy.
During embryonic development, the nervous system begins as a single strip of specialized ectoderm, the neural plate, which folds on itself to form the neural tube that later becomes the spinal cord and the brain, while nerves grow out of it towards the other parts of the body. It never happens that two separate pieces of neural tissue become attached.
AFAIK, if you inject stem cells into the severed spine of a rat and play with chemical growth signals, you may get the formation of new neural tissue that makes more or less random connections with the existing tissue, which may recover some function (if it doesn’t cause cancer), but that doesn’t seem to be a precise process.
I wonder whether lizard tail regeneration involves the extension of a functional spinal cord.
I agree, but I did not want to overstate the case, so I used an estimate already provided in the forums. I certainly did not want the discussion to become about how likely recovery from cryonics is, and I am fairly happy with the results.
The best option is to embrace permanent death. The success of cryonics or other life-preserving technologies would be disastrous for humanity. Population estimates for the near future already cause panic about resources and sustainability, even without the further population increase that a lower death rate would bring. It would only be feasible if the birth rate were cut severely by imposing policies such as that in China. As procreation and the cultivation of family life are seen as integral parts of the human experience, these policies would probably be unsuccessful. Even if they were successful, would this really be desirable? We should accept the transient, cyclical nature of life on this planet.
That’s the “relinquish” option. The human race would not adjust sensibly to the new situation, so better to just not go there. But the argument that humanity can’t restrain itself or coordinate its actions can be applied from the other side: The human race will never relinquish life extension, too many people want to live, so we just have to make it work.
If humanity had the self-confidence and self-respect to seek immortality en masse, then we really would try to solve the other problems. The social contract would be, rejuvenation only if you have no more children. Outer space would be the safety valve. We’d fix a new average lifespan and organize to achieve equilibrium and sustainability around thousand-year lifespans rather than hundred-year lifespans. Something would be done.
But only a handful of people have that drive. So whether technological life extension comes to pass, will not be decided by humanity as a whole deliberately tilting in one direction or another. I think it’s predestined that it will come to pass, because the desire to live is stronger than the willingness to die as a moral example. If you just look at the roles that those two impulses play in human psychology, there’s no comparison, the will to live is much much stronger. So, though the human race doesn’t have enough will-to-live to be actively prioritizing longevity, it does have enough will-to-live to not outlaw it either. Only a traumatically overpopulated society would associate extended lifespan with suffering strongly enough to outlaw it.
The larger context here is that the human race is decoding the secrets of nature—the gene and the brain—and knowledge is power. Historically it’s a short step from science to technology. We are crossing those thresholds, from knowledge to power, just as surely as we are crossing all those environmental and population thresholds. And radical life extension, the ability to keep a human being in a state of youth and health for unnaturally long periods, is almost just an incidental consequence of this. You need to contemplate something like the full range of lifeforms on Earth, and then add science-fiction ideas about robots, smart machines, artificial life, and artificial intelligences, and then imagine that you could become any one of those lifeforms, while having all those machines working for you … to get an inkling of what that power should make possible.
That power will come into the possession of a minority. It may be a minority numbering in the millions, a globally distributed technology-using class which then becomes the de-facto posthuman technocracy of Earth. And it would then be the cultural politics and other internal politics of that technocracy which decides what becomes of the rest of us. They might keep Earth as a nature preserve for old-style humans, policed to ensure that we don’t independently reinvent the technologies of power. They might create a social order which permits natural humans to join the techno-aristocracy, but only after they have shown a readiness to be responsible, and a psychological capacity to be responsible. Such a transition might even require neurological intervention, to ensure the presence in the new posthuman of whatever norms are deemed essential within the technocracy.
Depending on how concentrated power becomes, there is the potential for far more coercion. At one extreme, ultimate power is wielded by “unfriendly” artificial intelligences that care nothing for humanity and who just pave the earth with a new machine ecology. Or the summit of the technocracy might be a faction who use humanity for their own amusement, or a cult who have some highly specific posthuman ideal to which we will all be made to conform.
A humane transhuman order does seem to be a possibility, but there’s no getting around the fact that it would have to contain elites possessing powers even greater than the national-security elites of today’s superpowers, with their nuclear weapons and global surveillance networks. In particular, a neurocratic elite for whom human nature is transparent and manipulable is implied. The neurocrats might dominate, or they might just be one segment of the technocracy. But they would be extremely important, because their tools and knowledge would be responsible for perpetuating the values that give their civilization its stability and its defining tendencies.
So Laurie, I think that’s the big picture. The future likely contains some combination of artificial intelligence and transhuman neurotechnocracy, that will regulate the bounds and the directions of posthuman civilization. How stable that will be, whether there will be competing power centers, how the story of that civilization could unfold (because it too would have a life, career, and death, it wouldn’t be a timelessly stable order), are all highly contingent on what happens in our immediate future. At least you found your way to one of the places that might make a difference; LW could become a sort of finishing school for future AI programmers and neurocrats. I don’t think you’ll convince them to relinquish their future power, that just means that someone else will seize the throne. But maybe you can shape their thinking. Who knows, with time you might even join them.
I don’t know what the reproduction rate would look like if there were no aging. It’s very hard to tell what proportion of people like raising children, or whether people would want to keep raising children after the first few, or how much they’d want to keep raising children if they won’t need anyone to take care of them as they get old. Or for that matter, whether raising children would be so much easier with future tech that the prospect would be more attractive for more people.
I didn’t say that investigation into the possibilities of life lengthening should be prevented or inhibited. One of our most primal urges is to survive at any cost. Obviously human curiosity for what is possible will lead us into these areas of research. However I think that if these methods proved viable to the extent you suggest (1000 year life spans) this would ultimately be ruinous for the human race. I think you agree here as the situation you describe as a likely outcome is by no means desirable:
In the definition of the “relinquish” option it argues that the prevention of certain technological investigations is characteristic of a totalitarian society. While I agree with that I have to say that the society you describe sounds worse. So maybe if what you describe was really a likely outcome we should think about the value of restrictive measures now for preventing much bigger infringements of rights in the future. The prospects you describe as inevitable are deeply depressing: an elite group of humans enforcing the complete subjugation of the rest of the planet.
You also dismiss the problems of overpopulation and lack of resources, and lean on the colonisation of space, despite the fact that these issues are nowhere near being solved now. If it were possible to colonise space on a massive scale, that again I feel would be completely undesirable. Instead of looking for contingency plans, we should confront the problems of global warming and of scarce food and energy sources directly.
Essentially I think the desire for a prolonged life span is born out of fear of death, eternity and uncertainty. This is a natural human reaction to death but in the end even if we could push death further and further away we would still have to face it at some point. Most people don’t need an extra 900 years of life. I don’t think we should allow our selfishness to corrupt the natural continuation of the human race.
I find that to be a stunningly selfish and elitist point of view. How do you know, and why are you deciding for them?
How? By “most people” I’m effectively implying no one; I’m not suggesting there are some people who deserve extra life and some who don’t. Sorry if that isn’t clear; where I’m from that would be implicitly understood.
My point was that you’re thinking that you know enough to decide that for everyone. You’re also bringing in a concept of need (need for what?) when I think the relevant question is whether people want to live longer.
And I’m including myself in that “no one” too, obviously! I don’t know whether I need to make that explicit, but I just realised you said selfish as well as elitist.
To my mind, the selfishness was that you thought you could reasonably make that sort of decision for the whole human race.
OK, I don’t think I would use the word “selfish” in that context (arrogant or misguided, maybe), so I wasn’t sure what you meant. Anyway, a lot of people nearing the end of life say that they are happy to accept death; many very old people even wish for it. Obviously people have regrets, but I think you would have those regardless of life span. I’m not disputing the fact that many people do wish for immortality or a massively increased life span.
I brought up the question of need because of the possible negative impact that the invention of immortality could have on the human race and the rest of the world. If it is just a question of desire then the possible consequences have to be more closely scrutinised than if it actually served a purpose.
Oops just replied to myself but meant to reply to you.
It’s not subjugation if all they do is prevent independent outbreaks of destructive technology.
I can see that I described four forms of posthuman technocracy, which we could call wilderness, meritocracy, exploiter, and cult. (Don’t hesitate to employ better names if you think of them.) Wilderness and meritocracy are the ones that you quoted, and I can’t view them as bad; in fact they could be positively utopian. We’re talking about a world which has solved the crisis created by technology, by arriving at a social and cultural form that can employ technology’s enormous potential constructively.
Such a world has to ensure that the crisis isn’t recreated by independent reinvention, and it has to regulate its own access to techno-power. The “policing” is for the first part, and “entry exams for the techno-aristocracy” is for the second part. Do you find the prospect of even that much structure depressing, or is it just that you think the dystopian possibilities would inevitably come true?
We should distinguish between any exacerbation of existing problems due to longevity (your first list), and the problems that already exist and would continue to exist even without extra longevity (your second list). Just so we have names for things, I’ll call the second type of problem a “sustainability problem”; a technological solution to a sustainability problem, a “sustainability technology”; the first type of problem, an “exacerbation problem”; and a technology implicated in an exacerbation problem, a “destabilizing technology”. So we can call, for example, solar cells a sustainability technology, and stem cells a destabilizing technology. (These are precisely the sort of ultra-broad categories that can lead you astray, because under certain circumstances, solar cells might be destabilizing and stem cells might be sustainability-enhancing. But I felt the need for some extra vocabulary.)
So I summarize this part of your position as follows: we should prioritize the solution of sustainability problems, and avoid exacerbation problems by staying away from destabilizing technologies.
There’s a large number of sub-issues we could explore here, but I see the central fact as the failure of relinquishment as anything more than a local tactic that buys time. Someone somewhere will develop the destabilizing technologies. Independent development can’t be prevented or “policed” except by someone already possessing similar powers. So ultimately, someone somewhere must learn how to live with personally having access to destabilizing technologies, even if these powers are locked away most of the time and only brought out in a crisis. The various “posthuman technocracies” that I sketched are speculations about social forms that don’t keep advanced technology completely sealed away, but which nonetheless have some sort of resilience.
Well, the moral and existential dimension of these discussions is hotly contested. I focused on politics and pragmatism above, but I’ll try to return to the other topics in the next round.
You didn’t say destructive technologies, you said “technologies of power”, implying that the technological and scientific domains in this hypothetical future world would be stifled or non-existent in order to prevent humans grasping power for themselves and threatening the reign of this “posthuman technocracy”. That sounds like subjugation to me. You also say they would “decide what would become of the rest of us”. That also sounds like complete totalitarian domination of the “old style human beings”. My ideal world would be an egalitarian society in which everyone is afforded the same indispensable rights and liberties, where the power balance is roughly equal from person to person, or where the leadership is transparent in its governing and is not afforded liberties others are not; what you describe does not sound like a utopian society.
I’m not backing the prevention of investigation into these “destabilising technologies”, as you have christened them. I think that the expansion of knowledge in every domain is desirable, though we should work out better ways of managing the consequences of such knowledge in some cases. But I am kind of confused as to why you think that relinquishment now is unacceptable (not that I’m advocating that it is acceptable) but that in the future you would welcome the idea that humans would be policed in order to prevent the development of technologies that could threaten the existence of the elite posthumans, or any independent innovation. Isn’t that analogous to a hypothetical example of present-day humans policing or preventing technologies that could threaten their existence in the future? Yet you reject that strategy as relinquishment. If you think this is an unsustainable method now, why do you think it will be a viable solution in the future? Do you think it would be more acceptable because it would be easier to enforce in a more strictly policed and less tolerant society? Do you object to “relinquishment” on an ideological basis or a pragmatic basis?
Also I’m not sure we need the distinction between the two kinds of problems: global warming, lack of resources (food/energy) and overpopulation are all problems that would continue to exist without longevity and would be exacerbated by it. Colonisation of space is a separate issue; it isn’t a problem as such, except in that it is not currently possible, so some may find it problematic in that respect.
I didn’t just mean political power or power over people. Consider the “power” to fly to another country or to talk with someone on the other side of the world. Science and technology produce a lot of powers like that. The advance of knowledge to the point that you could rejuvenate a person or revive someone from cryonics implies an enormous leap in that sort of power.
Total relinquishment doesn’t work because it requires absolutely everyone else to be persuaded of your view. If just one country opts out and keeps doing R&D, then relinquishment fails. But a society where advanced technology does already exist has some chance of controlling where and how it gets reinvented. Such a society could become overtly hierarchical and have no notion of equal rights.
But even a society with deep egalitarian values would need to find a way to assimilate these powers, rather than just renounce them, to remain in control of its own destiny. Otherwise it risks waking up one day to find that a different set of values are in charge, or just that someone played with matches and burned down the house.
One irony of technological power is that it offers ways for egalitarian values to survive, even in deeply destabilizing circumstances. A theme of the modern world is the fear of homemade WMDs. As knowledge of chemistry, biology, nanotechnology, robotics… advances, it becomes increasingly easy for an isolated lab to cook up a doomsday device. One response to that would be to just strip 99% of humanity of the power and the right to make a technology lab. The posthuman technocrats live in orbit and monitor the Earth to see that no-one’s in a cave reinventing rocketry, and everyone else goes back to hunting and gathering. Luddism and transhumanism reach a compromise.
Hopefully you can see that a luddite world with a small transhuman elite is more stable than a purely luddite world. The hunter-gatherers can’t do much about it if someone restarts the industrial revolution. This is why total relinquishment is impractical.
But what about transhuman egalitarianism? Is that workable? If anyone can make a doomsday lab, won’t it all come crashing down? This is why there’s a “neuro” in “neuro-technocracy”. Advanced technology’s doomsday problem is mostly due to malice (I want to destroy) or carelessness (I’m playing with matches). Malice and carelessness are psychological states. So are benevolence and competent caution. Human society already has a long inventory of ways, old and new, in which it tries to instil benevolence and competence in its members, in which it watches for sociopathy and mental breakdown, and tries to deal with such problems when they arise.
In a society with truly profound neuroscientific knowledge, all of that will be amplified too. It should be possible to make yourself smarter or more moral by something akin to increasing the density of neural connections in the relevant parts of your brain. I don’t mean that neural connectedness is seriously the answer to everything, it’s just a concrete example for the purposes of discussion. The idea is that there are properties of your brain which have an impact on the traits that determine whether you can or can’t be trusted with the keys to advanced technology.
For a culture that has absorbed advanced technology, neuroscience and neurotechnology can become one of the ways that it protects and propagates itself, alongside more traditional methods like education and socialization. This is yet another futurist frontier where there’s an explosion of possibilities which we hardly have the concepts to discuss. We still need to work through the cartoon examples, like spacepeople and cavepeople, in order to get a feel for how it could function, before we try to be more realistic.
So let’s go back to the scenario where we have high technology in orbit and a new stone age on Earth. I set that up as an example of a nonegalitarian but stable scenario. Now let’s modify the details a bit. What if stoners and spacers have a shared culture and it’s a matter of choice where you reside and how you live? All a stoner has to do is go to the local communication monolith and say, I want to join the spacers. As a member of solar-system civilization, they will have access to all those technologies that are forbidden on Earth. So a condition of solar citizenship might be, a regular and ongoing neuropsychological examination and tune-up, to ensure that bad or stupid impulses aren’t developing. Down on Earth they can be a little more lax, but solar society is full of dangerously powerful technologies, and so it’s part of the social contract that everyone stays morally and intellectually calibrated, to a degree that is superhuman by present-day standards. And when someone migrates from the stone age to the space age, it’s a condition of entry that they adopt space-age standards of character and behavior, and the personal practices which protect those standards.
That’s the cartoon version of benevolent space-age neurotechnocracy. :-)
That is a rather unpopular view here. The common position is one along the lines of “People should be able to live for as long as they wish.” And the response to concerns of overpopulation is often “Let’s hurry up and get ready to colonize space.” Or “Computer emulations would be a lot cheaper.” Or something along those lines.
Basically, just because immortality would be problematic in some respects, it doesn’t mean that we have to consider the systematic ending of human life to be an acceptable state of affairs, and it doesn’t mean we shouldn’t look for solutions to the death and overpopulation problem as a whole.
Isn’t it good to have a multiplicity of viewpoints presented?
I find this point of view unsurprising, as it reflects the greed and selfishness instilled in the population of the overindulgent western world today. Everything should be for sale, we should be able to have whatever we want, even though the consequences for the rest of humanity, the other species on this planet and the environment could be devastating. Death isn’t a problem to be overcome; it’s the natural conclusion to life.
“Selfish” is more typically attached to those people who are okay with other people dying; not the people who are not okay with it.
I don’t know of any person here who wants immortality only for themselves and says to hell with everyone else. I suggest you actually read up on the actual views of the people in this community rather than strawmanning and making caricatures out of them.
You’re imbuing a moral quality to the word “natural” which isn’t actually there. By the same argument earthquakes and tsunamis are also not problems to be overcome, they’re the natural result of tectonic movement.
Nor is starvation a problem to be overcome, it’s the natural conclusion to the lack of sufficient food. Nor is infant mortality a problem to be overcome—after all women can make babies once every nine months, so it’s natural for so many infants to perish.
All problems to be overcome are “natural”. Until they’re solved, at which point they’re no longer natural, their solution is.
Selfish because if everyone on this planet chose to be immortal and continued to reproduce, life on earth would be unsustainable unless major innovations in relation to problems such as the ones mentioned previously were realised. Even if they were, the quality of life would inevitably be lower, and eventually the human race would die out if the birth rate exceeded the death rate by such massive numbers. If reproduction were stopped it might be feasible, but that would probably have to be controlled and enforced, which I don’t agree with for the reasons stated earlier.
I wasn’t actually addressing everyone in this community I was replying to a comment from one individual.
Maybe you’re right about that, although you’re stretching it a bit with the infant mortality argument. And I don’t think tsunamis are problems to be overcome; I think we can only deal with the consequences. Starvation in many places is a problem due to overpopulation, which would only be exacerbated if people stopped dying. And I do think that the quest for immortality is ultimately a selfish goal. When considering immortality, do you honestly think “it would be great if I could live forever” or “it would be great if I and everyone on the planet now could live forever”? Maybe you do think the latter, but I think if immortality were discovered tomorrow it would be concentrated in the hands of a rich elite who judge their lives to be more important than the rest of ours. After a while it would be sold to those wealthy enough to afford it: just another way for the wealthy elite to conserve their power. These people would control the world through the generations and decide their own world order. This would help to cause the stagnation of political ideas, social change and innovation.
Really? It was the chief method of population-control once upon a time, much like death by aging is now. They seem pretty analogous in most ways.
People were “selfish” back then to not want their infant babies to die. People are the same sort of selfish now to not want to see their parents die.
Lack of death in both cases causes the same sorts of problems, but people adjust to problems. Fertility declined after infant mortality dropped; fertility per year will also decline if people have their youthful years extended indefinitely.
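As a minimal sketch of that demographic point (the rates below are purely illustrative assumptions, not predictions), compare steady compounding at roughly 2012-era birth and death rates with a hypothetical no-aging world where per-year fertility falls:

```python
def project_population(pop, birth_rate, death_rate, years):
    """Project population forward with constant per-capita annual rates."""
    for _ in range(years):
        pop += pop * (birth_rate - death_rate)
    return pop

# Roughly current world rates (circa 2012): ~19 births and ~8 deaths
# per 1000 people per year.
with_aging = project_population(7e9, birth_rate=0.019, death_rate=0.008, years=100)

# Hypothetical "no aging" case: only accidental deaths remain, but fertility
# per year falls sharply, as it did after infant mortality dropped.
no_aging = project_population(7e9, birth_rate=0.004, death_rate=0.001, years=100)

print(f"with aging:    {with_aging:.2e}")
print(f"without aging: {no_aging:.2e}")
```

Under these made-up numbers the no-aging population actually grows more slowly; whether real fertility would fall that far is, of course, the open question.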
Do you realise that you just made two contradictory arguments? Before, you said there would be overpopulation because immortality would be given to everyone. Now you say it would be given only to some (so there’s no problem of overpopulation), but that these few would form an elite.
Those are two opposite problems—which one do you believe will be the actual case?
How does that follow? In what way does medical immortality give these people greater powers of control than any current or medieval non-immortal dictator or monarchical dynasty?
Well actually I think you misunderstood me. The statement you’re basing your argument on is “Death isn’t a problem to be overcome it’s the natural conclusion to life.” I admit that I may have “imbued the word natural with moral weight”. However you are responding as if I had said “death is natural therefore it is desirable” which I did not and which would be a pretty meaningless statement to make. I merely used natural as an adjective, I could have used “inevitable” or “only” or many others instead. The adjective was obviously misplaced because it had unintended connotations in the context. I should have reread the comment more carefully.
In answer to your second point, it depends when immortality was discovered. In the last example I said if the means to be immortal were discovered tomorrow. Obviously it is more likely that it would be discovered in hundreds or thousands of years, when the world will probably be radically different from today, so both scenarios are complete conjecture. Neither of us can know what the true impact on society would be. I think it will be negative for the reasons I’ve given; I’m not sure what your position is, as you’ve not clearly stated it, though I’m assuming you think the effects would be positive? It would be interesting to hear what your views are.
Because they are able to maintain power for much longer. It is often when a dictator dies or is aging and infirm that their regimes are contested.
We don’t know what the true impact on society will be if medical immortality is discovered—but the thing is that the current impact on society of its lack is about 60 million deaths per year. A death toll of the scale of World War 2, every single year.
Can I condone such a death toll for reasons as uncertain as the fear of the possible formation of an immortal super-elite which will lead the rest of humanity to misery, or the fear of overpopulation bringing misery, or of dictators lasting a bit longer in power than they otherwise would?
No, I can’t condone it. Yes, I’m sure lots of problems will arise if medical immortality is discovered. But as a rough calculation none of those problems is nearly certain enough to justify 60 million deaths per year in return. Your own calculations of this may be different.
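For what it’s worth, the 60-million figure is easy to sanity-check with rough 2012-era numbers (both inputs below are approximations):

```python
# Back-of-envelope check of the "60 million deaths per year" figure.
world_population = 7.0e9      # about 7 billion people
crude_death_rate = 8 / 1000   # roughly 8 deaths per 1000 people per year

deaths_per_year = world_population * crude_death_rate
print(f"{deaths_per_year / 1e6:.0f} million deaths per year")  # ~56 million

# For comparison, total WW2 deaths are usually estimated at 60-85 million,
# spread over about six years.
```

That lines up with the order of magnitude quoted above.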
--
As a sidenote your concept of just the elite becoming immortal isn’t an automatic dystopia either. If anything, I think that might make them a bit more responsible in evaluating the long-term consequences of their policies.
To ask the natural followup question:
If a Victorian thinker had challenged the appropriateness of medical advances like open-heart surgery on the grounds that the Earth was dangerously close to carrying capacity, would you be persuaded that medical researchers were selfish or misguided?
Putting it slightly differently, there’s not yet a compelling case that the current average human lifespan or carrying capacity of the planet are set in stone by physical laws. Human science has increased both those numbers many times throughout history.
If you want to make the argument that a counterfactual world with 3 billion humans, all living at the American standard of living, would be more moral than the current setup, that’s a respectable position. But there’s substantial difficulty getting from here to there. If we’re both wishing for pie in the sky, doesn’t it seem more pleasant to wish for a world of 6 billion people sustainably living at the American standard, and to try to think of how to get to that outcome?
No, because life-saving procedures are a different matter from procedures that ensure immortality, which would effectively cut the death rate in a hypothetical situation where everyone in the world had access to them. My point is that I don’t think this would be sustainable; it would lead to dire consequences for the human race. As I mentioned to Mitchell Porter, I didn’t say that experimentation in this area should be prevented; I just think that it is not a desirable road for humanity in the event of success.
Two points:
1) I’m doubtful that the distinction between lifesaving procedures and immortality will end up being a clear distinction. I’m optimistic that humanity will eventually have the capability to do things like replace lost limbs with new limbs. Once we have that level of capacity, most of the modern causes of death go away—if you survive to reach the hospital, you’re likely to be able to leave basically good as new.
2) Given our current levels of technology, Western-level standards of living for everyone are not sustainable. Nor do there appear to be imminent technological advances that would make that sustainable. But radical life-extension technology (of whatever form) is also nowhere near imminent. Why do you think that the kind of technology advances you dislike are closer to achievement than carrying-capacity advances?
Some of your comments suggest that you would oppose carrying-capacity increases (like colonizing other planets) even if they were within humanity’s capacity, because these technological capacities would be bad for humanity. Assuming you are correct that these technological revolutions would fundamentally change human society, why are these hypothetical changes worse than the changes caused by developments like selective breeding of livestock, practical steam engines, cell phones, the Internet, or agriculture itself?
I’m not sure that regrowing limbs is much like rejuvenation. Most people die of aging, not accidents.
Wait, I don’t think I said I “dislike” any technological advances. I’m not opposing investigation into life-preserving technology, and I would be greatly impressed if a “cure for death” were discovered; I just think that the effect on human life would ultimately be negative.

I said the idea of colonising space would be depressing to me. By colonising space I don’t mean living on other planets, as right now that is an impossibility; I mean living in spaceships. This would in no way be comparable to living on planet Earth, and the psychological implications of remaining in an enclosed space for such a long period of time would be great. Even if these and other practical limitations could be overcome, I find this idea disagreeable on an emotional/aesthetic level. I find the idea of leaving the beauty of the natural world in favour of a simulated reality within a spacecraft deeply sad, particularly if this was the result of the irreparable destruction of planet Earth rendering it uninhabitable for humans.

Regarding the technological revolutions you mention, there have been many that have had an extremely negative impact on human society and the world itself, the most obvious being the utilisation of fossil fuels for energy. The examples you mention are pretty benign, but in the case of agriculture: http://www.guardian.co.uk/global-development/2012/aug/26/food-shortages-world-vegetarianism?INTCMP=SRCH Also: “Meat production accounts for about 5% of global CO2 emissions, 40% of methane emissions and 40% of various nitrogen oxides. If meat production doubles, by the late 2040s cows, pigs, sheep and chickens will be responsible for about half as much climate change impact as all the world’s cars, trucks and aircraft.” (Guardian)
Human beings have engineered many impressive innovations in technology and science; unfortunately, these have also had some terrible side effects that we need to overcome as soon as possible.
Sure, but this sort of reaction is historically contingent; our culture could have developed such that you would feel differently. These sorts of judgments are very fluid over time: what the Victorians found aesthetic differed from what the Romans found aesthetic, which differs from what we find aesthetic. This fluidity makes it very hard to tell when the judgments should be taken seriously. Whereas we know that almost all technological advances have reduced poverty.
Even as a believer in AGW, I’m pretty confident that the Industrial Revolution (which started with coal and moved to oil) was a net benefit to human happiness. Separately, it wouldn’t surprise me at all if there were a near-term rise in the incidence of vegetarianism in the West for food-shortage reasons. (Food is a zero-sum game: there’s a finite amount of energy per unit time that Earth receives from the Sun. Every calorie spent digesting grass to build cow bone is a calorie that can’t sustain a human.)
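A crude illustration of that zero-sum point, as a sketch (the ~10% trophic-efficiency figure is a textbook ecological rule of thumb, not a measured value for modern livestock, and the per-person calorie figure is a round assumption):

```python
# How many person-days a fixed crop of feed calories supports, eaten
# directly versus routed through livestock first.
plant_calories = 1_000_000        # kcal of feed crops grown
trophic_efficiency = 0.10         # rough fraction of feed energy retained as meat
kcal_per_person_day = 2000        # assumed daily requirement per person

meat_calories = plant_calories * trophic_efficiency
person_days_direct = plant_calories / kcal_per_person_day
person_days_via_meat = meat_calories / kcal_per_person_day

print(person_days_direct, person_days_via_meat)   # 500.0 vs 50.0
```

Under these rough assumptions, the same crop feeds roughly ten times fewer person-days when converted to meat, which is the sense in which the calories are zero-sum.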
I would rather be immortal and limited to two children for all of my immortal lifespan than die.