The first part was good. The ending seems to be making way too many assumptions about other people’s motivations.
Consider that in a 2016 survey of Less Wrong users, only 48 of 1,660 or 2.9% of respondents answering the question said that they were “signed up or just finishing up paperwork” for cryonics. [Argument from authority here]. While this is certainly a much higher proportion than the essentially 0% of Americans who are signed up for cryonics based on published membership numbers, it is still a tiny percentage when considering that cryonics is the most direct action one can take to increase the probability of living past one’s natural lifespan.
First off, this last sentence is probably wrong. The most direct actions you can take to increase your expected lifespan (beyond obvious things like eating) are to exercise regularly, avoid cars and extreme sports, and possibly make changes to your diet.
This objection is consistent with the fact that 515 or 31% of respondents to the question answered that they “would like to sign up,” but haven’t for various reasons. Beyond that, when asked “Do you think cryonics, as currently practiced by Alcor/Cryonics Institute will work?”, 71% of respondents answered yes or maybe.
I had to look through the survey data, but given that the median respondent said existing cryonics techniques have a 10% chance of working, it’s not surprising that a majority haven’t signed up for it. It’s also very misleading how you group the “would like to” responses. 20% said they would like to but can’t because it’s either not offered where they live or they can’t afford it. The relevant number for your argument is the 11% who said they would like to but haven’t got around to it.
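For concreteness, the percentages being argued over can be checked directly. A quick sketch: the 48, 515, and 1,660 figures come from the survey discussion above, while the split counts for the “would like to” bucket are back-calculated from the stated 20% and 11%, so treat those as approximate.

```python
# Sanity check of the survey percentages discussed above.
respondents = 1660       # respondents answering the cryonics-status question

signed_up = 48           # "signed up or just finishing up paperwork"
would_like_total = 515   # all "would like to sign up" responses combined

def pct(n):
    """Share of respondents, as a percentage rounded to one decimal."""
    return round(100 * n / respondents, 1)

print(pct(signed_up))         # 2.9
print(pct(would_like_total))  # 31.0

# Approximate split of the "would like to" bucket (counts back-calculated
# from the stated 20% / 11% figures, so hypothetical reconstructions):
cannot = 332         # not offered locally, or can't afford it (~20%)
not_gotten_to = 183  # "haven't got around to it" (~11%)
print(pct(cannot), pct(not_gotten_to))  # 20.0 11.0
```

The back-calculated counts do sum to 515, which is at least consistent with grouping the responses either way.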
If a reliable and trustworthy source said that for the entire day, a major company or government was giving out $100,000 checks to everyone who showed up at a nearby location, what would be the rational course of action?
This example is exactly backwards for understanding why people don’t agree with you about cryonics. Cryonics is very expensive and unlikely to work (right now), even in ideal scenarios (and I’m pretty sure that 10% median is for “will Alcor’s process work at all”, not “how likely are you to survive cryonics if you die in a car crash thousands of miles away from their facility”).
Any course of action not involving going down and collecting the $100,000 would likely not be rational.
You’re ignoring opportunity cost and motivations here. If someone wants $100,000 more than whatever else they could be doing with that time, then yes. But as we see above, not everyone agrees that a tiny, tiny chance of living longer is worth (the opportunity cost of) hundreds of thousands of dollars.
And I should point out, I personally think cryonics is very promising and should be getting a lot more research funding than it does (not to mention not being so legally difficult), but I think the probability of it working in common cases like not dying inside Alcor’s facility right now is very low.
The most direct actions you can take to increase your expected lifespan (beyond obvious things like eating) are to exercise regularly, avoid cars and extreme sports, and possibly make changes to your diet.
I said cryonics was the most direct action for increasing one’s lifespan beyond the natural lifespan. The things you list are certainly the most direct actions for increasing your expected lifespan within its natural bounds. They may also indirectly increase your chance of living beyond your natural lifespan by increasing the chance you live to a point where life extension technology becomes available. Admittedly, I may place the chances of life extension technology being developed in the next 40 years lower than many Less Wrong readers do.
With regard to my use of the survey statistics: I debated the best way to present those numbers that would be both clear and concise. For brevity I chose to lump the three “would like to” responses together, because it actually made the objection to my core point look stronger. That is why I said “is consistent with”. Additionally, some percentage of “can’t afford” responses are actually respondents not placing a high enough priority on it rather than being literally unable to afford it. All that said, I do agree that breaking out all the responses would be clearer.
I had to look through the survey data, but given that the median respondent said existing cryonics techniques have a 10% chance of working, it’s not surprising that a majority haven’t signed up for it.
I think this may be a failure to do the math. I’m not sure what chance I would give cryonics of working, but 10% may be high in my opinion. Still, when considering the value of being effectively immortal in a significantly better future even a 10% chance is highly valuable.
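To make “do the math” concrete, here is a minimal expected-value sketch. All numbers are hypothetical placeholders: the 10% probability echoes the survey median discussed above, the cost figure is a round stand-in for signup-plus-preservation expenses, and the value-if-it-works figure is precisely the contested input.

```python
# Toy expected-value comparison for signing up, with hypothetical numbers.
p_works = 0.10   # median probability from the survey discussion above
cost = 100_000   # rough lifetime cost in dollars (hypothetical placeholder)

def expected_gain(value_if_works):
    """EV of signing up: probability-weighted payoff minus the certain cost."""
    return p_works * value_if_works - cost

# The decision flips entirely on the contested "value of revival" input:
print(expected_gain(500_000))      # -50000.0   (modest value: don't sign up)
print(expected_gain(100_000_000))  # 9900000.0  (huge value: sign up)
```

This is where the two sides can agree on the 10% figure and still rationally disagree on the action: the sign of the expected gain depends almost entirely on how much one values revival.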
I wrote “Any course of action not involving going down and collecting the $100,000 would likely not be rational.” I’m not ignoring opportunity costs and other motivations here. That is why I said “likely not be rational”. I agree that in cryonics the opportunity costs are much higher than in my hypothetical example. I was attempting to establish the principle that action and belief should generally be in accord. That a large mismatch, as appears to me to be the case with cryonics, should call into question whether people are being rational. I don’t deny that a rational agent could genuinely believe cryonics might work but place a low enough probability on it and have a high enough opportunity cost that they should choose not to sign up.
I’m glad to hear you think cryonics is very promising and should be getting a lot more research funding than it does. I’m hoping that perhaps I will be able to make some improvement in that area.
I find your statement about the probability of cryonics working in common cases being very low interesting. Personally, it seems to me that the level of technology required to revive a cryonics patient preserved under ideal conditions today is so advanced that even patients preserved under less than ideal conditions will be revivable too. By less than ideal conditions I mean a delay of some time before preservation.
I chose actions that will increase your lifespan in general, since that’s strictly better than only increasing the chance that, if you live long enough for it to matter, you will live past your natural lifespan.
Evaluating the expected value of cryonics is hard because it runs into the same problem as Pascal’s Wager: a huge value in a low-probability case. I’m not really sure how to handle that.
The reasons I don’t think it’s likely to work right now are:
- Current processes may not preserve human-sized brains well at all, even in ideal conditions (successful cryonics experiments seem to involve animals with brains much smaller than ours)
- Alcor may not do the preservation perfectly
- The technology to reconstruct our brains from frozen ones may not be possible, or might be so far off that the brain is damaged before it becomes possible
- Alternatively, you could use whole-body preservation, but then the problems in my first point are significantly worse.
In non-ideal conditions, your brain is dead, breaking down, and losing information permanently. A sufficiently powerful AI might be able to make reasonable guesses, but it’s not clear how much the person it creates would really be you after extensive damage.
The leading causes of death for people aged 15-34 are injury, suicide, and homicide. All of those have a high chance of involving trauma to the head, which makes things much worse. For example, someone who dies in a car crash is probably not going to get much value from cryonics. https://www.cdc.gov/injury/images/lc-charts/leading_causes_of_death_age_group_2014_1050w760h.gif
And this last one brings up my first point again: if I want to not die, it’s much more effective to drive safely (or not drive), get adequate medical care, exercise, etc. than to focus on the small chance of surviving after my body is already dying.
Overall, I liked the opening salvo of this new blog.
I’ll add two considerations:
1 - the point of a map is to have blank spaces or omissions (a 1:1 chart would be of no use). A map is useful precisely when it’s incomplete, but only when those blanks are covering inessential features of the territory;
2 - the best approximation to the art of perfect rationality that we currently have is Bayesian probability, and it teaches us that to arrive at a correct belief, we not only have to account for the evidence and reason correctly, but we should also start from prior beliefs that aren’t too far from the truth already. I think this aspect is often overlooked when talking about instrumental rationality, but it is essential: I can account for the correct evidence, but if my prior belief in medical resurrection is very low from the beginning, then it is rational not to sign up for cryonics.
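The point about priors can be made quantitative with a one-line Bayes update. A sketch with made-up numbers: both the prior and the likelihood ratio of the evidence are hypothetical.

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
def posterior(prior, likelihood_ratio):
    """Probability after seeing evidence with the given likelihood ratio."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

# Even fairly strong evidence (likelihood ratio 10) barely moves a very
# low prior, while the same evidence moves an agnostic prior a lot:
print(round(posterior(0.001, 10), 4))  # 0.0099
print(round(posterior(0.5, 10), 4))    # 0.9091
```

So two reasoners can process the same evidence correctly and still land far apart, which is the sense in which starting priors matter for instrumental decisions like signing up.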
There is a comment from Mitchell Porter that hasn’t been addressed for a few days. I find it quite relevant, so I will repeat it here:
“Are you saying that these ‘answers’ are already known?”
I think it is a fair point that the sentence (and other parts of the post) implies knowledge of the answers.
I’ve since responded to Mitchell Porter’s comment. For the benefit of Less Wrong readers, my reply was:
For many questions in philosophy the answers may never be definitively known. However, I am saying that many proposed answers to these questions are very likely false, based on the evidence and on some properties that the answers should have. Others of these questions can be dissolved.
For example, epistemological solipsism can probably never be definitively rejected. Nevertheless, realism about at least some aspects of reality is well supported and should probably be accepted. In the area of religion, we can say the evidence discredits all historical religions. Any answer to the question of religion must accord with the lack of evidence for the existence or intervention of God, thus leading to atheism and certain flavors of agnosticism and deism. Questions of free will should probably be dissolved by recognizing the scientific evidence for the lack of free will while explaining the circumstances under which we perceive ourselves to have free will. Finally, moral theories should embody some form of moral nihilism properly understood. That is to say, that morality does not exist in the territory, only in the maps people have of the territory. Hopefully I’ll have the time to write on all of these topics eventually.
In acknowledging the limits of what answers we can give to the great questions of morality, meaning, religion, and philosophy, let us not make the opposite mistake of believing there is nothing we can say about them.
Thank you for posting the answer :)
My suggestion would be that your certainty about a few subjects is based on certain assumptions and selective research. In more detail:
In the area of religion, we can say the evidence discredits all historical religions.
That, I would propose, is not true. The scientific evidence discredits religious texts if taken as scientific texts. Religion has a dogmatic element that is tightly linked to the culture in which it was introduced, and which dies as the culture changes, and an inner element that is universal and points to certain facts about human nature.
I know that sounds like a bit too much, but it is, in my opinion, evident, and you can confirm it for yourself by suspending your assumptions and deeply studying multiple religious and mystical texts. I know it is unlikely, as you might, based on your current belief system, conclude it is not worth the time. My proposition would be that we are most in need of studying material that we do not agree with, as this is where our biases stem from.
Because of our militant attitude towards religion, I would suggest starting with a quite abstract exposition of the core in the Tao Te Ching and the Upanishads. These are of course ancient texts, and you should take that into account. They are not meant for us but are interesting as historical evidence. Once you start understanding it, you can see how it appears in all the significant religions in different forms. None of it contradicts established scientific truths.
For a modern exposition, and to really learn, you could study the works of Idries Shah. All of them. For a few years...
The problem with all of this is related to the one I tried to outline in my post Too Much Effort | Too Little Evidence. In making the suggestions I am making, I sound like a know-it-all, and it is impossible to convince you to spend a few years of your life properly studying the material. At the very least, maybe we can agree that the best approach to learning is to be able to balance on the edge between doubt and belief. I will leave it to you to decide whether this is what you are currently doing.
Finally, moral theories should embody some form of moral nihilism properly understood. That is to say, that morality does not exist in the territory, only in the maps people have of the territory. Hopefully I’ll have the time to write on all of these topics eventually.
It is interesting that you used the word ‘should’. At least you have to admit that the argument is not settled. Everything I wrote above applies to this statement. What you are missing in order to acknowledge the existence of valid alternative hypotheses is in the place you have decided it is beneath you to look.
Initial bits about evo-psych for epistemological development in humans ring true to me. I also liked the middle bit about map errors and how we might correct them. The ending bit with LessWrong as an example fell a little more flat for me. Overall, excited to see a new blog in the rationality space.