Are there any in-depth analyses of the cost/benefit of cryonics? I’m not convinced it’s the best use of one’s money, considering that the money spent could be given to charities to improve the world now, versus the very tiny chance of preserving your life. The immediate benefit of helping others now seems to considerably outweigh the selfish act of self-preservation, especially since anyone who can afford cryonics already has excess money that could go to charity now.
However, I am relatively new to the topic, so I am certain there are a whole host of issues I am ignorant of and I don’t mean to set up a false dichotomy, which is why I ask my original question.
If you value all human life (including your own) equally, it’s not the best use of your money. But holding constant the amount of money you spend on yourself, cryonics might make for an excellent investment. I’m an Alcor whole-body member.
Sometimes people argue that if you would pay a lot to save your own life from a fatal illness, then you don’t value lives equally but prefer your own, and therefore you should sign up for cryonics. This argument seems problematic to me, because it assumes my preference to save my own life from the fatal illness is ideal. In reality it might not be ideal at all. I am certainly not Zachary Baumkletterer, but I would likely be a better person if I were. If so, the problem is not that I am unwilling to sign up for cryonics, but that I would pay to save myself from the fatal illness instead of giving the money away. And the argument does not mean that if I refuse to sign up for cryonics, I must instead donate all my money to charity. It just means I am doing the best I feel I can, and if I signed up for cryonics I would be doing even worse (by doing less for others).
Yes, this, exactly.
I do nice things for myself not because I have deep-seated beliefs that doing nice things for myself is the right thing to do, but because I feel motivated to do nice things for myself.
I’m not sure that I could avoid doing those things for myself (it might require willpower I do not have) or that I should (it might make me less effective at doing other things), or that I would want to if I could and should (doing nice things for myself feels nice).
But if we invent a new nice thing to do for myself that I don’t currently feel motivated to do, I don’t see any reason to try to make myself do it. If it’s instrumentally useful, then sure: learning to like playing chess means that my brain gets exercise while I’m having fun.
With cryonics, though? I could try to convince myself that I want it, and then I will want it, and then I will spend money on it. I could also leave things as they are, and spend that money on things I currently want. Why should I want to want something I don’t want?
You might be able to achieve significantly better life outcomes for yourself by becoming more strategic.
When I ran the numbers, I came up with a change from 50% to 55% in my odds of surviving to the year 2100. It’s definitely not much, but I deemed it worthwhile. It’s also substantially more than the personal gain I would get by diverting that money to a charity like the SENS organization, even though donating to SENS would almost certainly be the higher global optimum.
No, I don’t have the previous calculations around anymore. I’ll probably be redoing them in the next couple of years to make sure it’s still worthwhile.
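For what it’s worth, here is a back-of-the-envelope sketch of the shape of that calculation. The probabilities below are made-up placeholders for illustration, not my actual inputs:

```python
# Back-of-the-envelope cryonics survival estimate.
# Every input here is an illustrative placeholder, not a real estimate.

p_survive_naturally = 0.50  # reach 2100 alive without cryonics
p_die_before_2100 = 1 - p_survive_naturally

# Conditional chain for cryonics working, given death before 2100:
p_good_preservation = 0.50  # preserved promptly and well
p_org_survives = 0.50       # the cryonics org keeps you frozen long enough
p_revival_works = 0.40      # revival tech is developed and applied to you
p_no_collapse = 0.85        # no civilizational collapse or extinction

p_cryonics_saves_you = (p_die_before_2100
                        * p_good_preservation
                        * p_org_survives
                        * p_revival_works
                        * p_no_collapse)

p_total = p_survive_naturally + p_cryonics_saves_you
print(f"P(alive in 2100) without cryonics: {p_survive_naturally:.0%}")
print(f"P(alive in 2100) with cryonics:    {p_total:.0%}")  # ~54%
```

With these placeholders the gain works out to about four percentage points, the same ballpark as my 50% to 55% figure.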
Is that 50-55% estimate conditional on no civilizational collapse or extinction event? Either way, it seems very optimistic. According to current actuarial estimates, a 30-year-old has about a 50% chance of living another 50 years. For life expectancy to dramatically increase, a lot of things have to fall into place over the next half-century. If you think anti-aging tech will be available in 30 years, consider how far medicine has advanced in the past 30. Unless there are significant breakthroughs, we’re sunk. I’m signed up for cryo and I donate to SENS, but my estimates are much more pessimistic than yours.
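A quick way to sanity-check that actuarial figure is a Gompertz mortality model, where the death rate grows exponentially with age. The parameters below are rough illustrative values, not a published life table:

```python
import math

# Gompertz mortality sketch: hazard mu(x) = a * exp(b * x).
# a and b are rough illustrative values, not fitted to a real life table.
a, b = 5e-5, 0.09

def p_survive(age, years):
    """P(a person of the given age survives the given number of years)."""
    return math.exp(-(a / b) * (math.exp(b * (age + years))
                                - math.exp(b * age)))

print(f"P(30-year-old lives another 50 years) = {p_survive(30, 50):.0%}")
# -> about 48%, consistent with the ~50% actuarial figure above
```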
I believe I used a fairly small number for civilizational collapse and extinction, on the order of ten to fifteen percent. I just don’t find such doomsday scenarios very plausible.
It may be that my background and upbringing have inured me to it—I’ve seen the end of the world not happen far too many times in my lifetime:
Communists failed to conquer everyone
There was no nuclear war or nuclear winter with Russia
The UN new world order didn’t enslave everyone
The end times of the second coming of Christ have failed to arrive at least a dozen times
Y2K didn’t cause problems
The 2012 apocalypse was just silly
There has been no superflu
The stock market has become more stable over time, not less
Peak oil happened and nobody even noticed
There was no hard AI takeoff
There’s probably more if I stop to think about it.
At the moment, I find biotech to be the most likely existential threat, with general civilization collapse and strong AI the next two major candidates.
The immediate benefit of helping others now seems to considerably outweigh the selfish act of self-preservation

Actually, I believe there is an interesting case to be made that brain preservation has immense public goods value.
The actual process of future resurrection—if possible—will revolve around statistical inference; it will necessarily involve a large amount of informed simulation/induction on the part of future AI.
The human cortex contains a model of the universe from the perspective of one observer, and other humans/agents are the most complex objects our brains must model. So the key information content of one particular human mind is not localized to a particular brain—it is instead distributed across many brains.
I’m not sure I understand. What do you mean when you say this:

the key information content of one particular human mind is not localized to a particular brain—it is instead distributed across many brains

Are you saying that the universe is part of all minds, not just one particular person’s? Can you explain what you mean more clearly?
I mean that the physical information which defines—or alternatively is required to reconstruct—a human mind is not strictly localized in space to the confines of a single brain.
Using the hardware/software analogy, the brain is the hardware, the mind is the software, but the mind is distributed software: each mind program runs mainly on a single brain, but it also has partial cached copies distributed on other brains.
For example, if two people spend a bunch of time together, they are going to have many shared memories. Later if both die and the brain of one is preserved, the shared memories are useful for constructing both minds. With many preserved brains, you get multiple viewpoints for many overlapping memories which allow for more precise reconstruction.
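As a toy statistical illustration of that last point (this assumes nothing about how memories are actually encoded; it is just the statistics of pooling noisy copies): if several brains each hold an independently noisy copy of the same memory, averaging them shrinks the reconstruction error roughly as one over the square root of the number of copies:

```python
import numpy as np

rng = np.random.default_rng(0)
true_memory = rng.normal(size=1000)  # stand-in for a shared memory trace

def reconstruction_error(n_brains, noise=1.0):
    """RMS error after averaging n independently-noisy copies of the memory."""
    copies = true_memory + rng.normal(scale=noise, size=(n_brains, 1000))
    estimate = copies.mean(axis=0)
    return np.sqrt(np.mean((estimate - true_memory) ** 2))

for n in (1, 4, 16):
    print(f"{n:2d} preserved copies -> RMS error {reconstruction_error(n):.2f}")
# error falls roughly as 1/sqrt(n): about 1.0, 0.5, 0.25
```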
I’m a little disturbed by the thought of reconstructing my personality from others’ impressions of my personality.