Incidentally, heart transplants and cryonics both cost about the same amount of money… does the “it’s selfish” argument also apply to getting a heart transplant?
Most of multifoliaterose’s criticisms of cryonics apply to the majority of money spent on medical treatments in rich nations.
Getting a heart transplant has instrumental value that cryonics does not.
A heart transplant enables the recipient to continue being a productive member of society. If the recipient is doing a lot to help other people, then the cost of the heart transplant is easily outweighed by the recipient’s productivity.
By way of contrast, if society gets to the point where cryopreserved people can be restored, it will likely have advanced to the point where such people are much less vital to it.
Also, the odds of success for a heart transplant are probably significantly higher than the odds of success for cryorestoration.
Edit: See a remark in a post by Jason Fehr at the GiveWell Mailing List:
“Think of Bill Clinton, who has now had a heart bypass as well as a cardiac catheterization at age 63. The world will almost certainly be better off having Bill Clinton around for a few more decades running his foundation, thanks to all that cardiovascular research we’ve been discussing.”
I don’t think that having Bill Clinton cryopreserved would be nearly as valuable to society as the cardiovascular operations that he underwent were.
So, then, should prospective heart transplant recipients have to prove that they will do enough with their remaining life to benefit humanity, in order for the operation to be approved?
I think you’re holding cryonics to a much higher standard than other expenditures.
Distinguish personal morality from public enforcement. In a liberal society our personal purchases should (typically) not require anyone else’s permission or “approval”. But it still might be the case that it would be a better decision to choose the more selfless option, even if you have a right to be selfish. That seems just as true of traditional medical expenditures as it does of cryonics.
But if, while he was President, Bill Clinton had known he was going to be cryopreserved, he might have caused the government to devote more resources to artificial intelligence research and existential risks.
Wouldn’t successful cryopreservation and revival have a good chance of doing the same, and for longer?
A life kept active and productive in the here and now might be more valuable in some respects than one that lies dormant for the near future, given that many more individuals will exist in the far future who would have to compete with the reanimated individual.
One of the defects of the karma system is that replies to comments tend to get fewer votes, even when they’re as good as the original comment. Here CronoDAS’s comment is at 9 and the response at only 4, even though the response does a very good job of showing that the cases mentioned are not nearly equivalent.
I consider Crono’s comment more insightful than multi’s and my votes reflect my position.
Would you disagree that the differences mentioned by multifoliaterose are real?
Anyway, in terms of the general point I made, I see the same thing in numerous cases, even when nearly everyone would say the quality of the comments is equal. For example, you might see a parent comment at 8 and a response at 2, maybe because people are less interested, or something like that.
Yes, I would disagree. A large fraction of the people who get heart transplants are old and thus not very productive. More generally, medical expenses in the last three years of life can easily run to as much as a hundred thousand US dollars, and often run into the tens of thousands. Most people in the US and Europe are not at all productive in their last year of life.
If I personally were debilitated to the point of not being able to contribute value comparable to the value of a heart transplant then I would prefer to decline the heart transplant and have the money go to a cost-effective charity. I would rather die knowing that I had done something to help others than live knowing that I had been a burden on society. Others may feel differently and that’s fine. We all have our limits. But getting a heart transplant when one is too debilitated to contribute something of comparable value should not be considered philanthropic. Neither should cryonics.
You are making an error by not giving your own well-being greater weight than the well-being of others. That weighting is a known aspect of human values.
Err, are you saying that his values are wrong, or just that they’re not in line with majoritarian values?
For one thing, multifoliaterose is probably extrapolating from the values xe signals, which aren’t identical to the values xe acts on. I don’t doubt the sincerity of multifoliaterose’s hypothetical resolve (and indeed I share it), but I suspect that I would find reasons to conclude otherwise were I actually in that situation. (Being signed up for cryonics might make me significantly more willing to actually refuse treatment in such a case, though!)
If you missed it, see my comment here. I guess my comment which you responded to was somewhat misleading; I did not intend to claim anything about my actual future behavior. Rather, I intended simply to make a statement about what I think my future behavior should be.
To put on my Robin Hanson hat, I’d note that you’re acknowledging this level of selflessness to be a Far value and probably not a Near one.
I have strong sympathies toward privileging Far values over Near ones in many of the cases where they conflict in practice, but it doesn’t seem quite accurate to declare that your Far values are your “true” ones and that the Near ones are to be discarded entirely.
So, I think that the right way to conceptualize this is to say that a given person’s values are not fixed but vary with time. I think that at the moment my true values are as I describe. In the course of being tortured, my true values would be very different from the way they are now.
The reason why I generally privilege Far values over Near values so much is that I value coherence a great deal and I notice that my Near values are very incoherent. But of course if I were being tortured I would have more urgent concerns than coherence.
The Near/Far distinction is about more than just decisions made under duress or temptation. Far values have a strong signaling component, and they’re subject to their own biases.
Can you give an example of a bias which arises from Far values? I should say that I haven’t actually carefully read Hanson’s posts on Near vs. Far modes. In general I think that Hanson’s views of human nature are very misguided (though closer to the truth than is typical).
Willingness to wreck people’s lives (usually but not always other people’s) for the sake of values which may or may not be well thought out.
This is partly a matter of the signaling aspect, and partly because, since Far values are Far, you’re less likely to be accurate about them.
Okay, thanks for clarifying. I still haven’t read Robin Hanson on Near vs. Far (nor do I have much interest in doing so) but based on your characterization of Far, I would say that I believe that it’s important to strike a balance between Near vs. Far. I don’t really understand what part of my comment orthogonal is/was objecting to—maybe the issue is linguistic/semantic more than anything else.
I’m saying that he acts under a mistaken idea about his true values. He should be more selfish (recognize himself as being more selfish).
I see what I say about my values in a neutral state as more representative of my “true values” than what I would say about my values in a state of distress. Yes, if I were actually in need of a heart transplant that would come at the opportunity cost of something of greater social value, then I might very well opt for the transplant. But if I could precommit to declining a transplant under such circumstances by pushing a button right now, then I would do so.
Similarly, if while being tortured for a year I were given the option to make the torture stop for a while in exchange for 50 more years of torture later on, I might take that option, but I would precommit to not taking such an option if I could.
What you would do has little bearing on what you should do. The above argument doesn’t argue its case. If you are mistaken about your values, of course you can theoretically use those mistaken beliefs to consciously precommit to follow them, no question there.
By what factor? Assume a random stranger.
Maybe tens or thousands, but I’m as ignorant as anybody about the answer, so it’s a question of pulling a best guess, not of accurately estimating the hidden variable.
I don’t understand how you can be uncertain between 10 and 1000 but not 1 and 10 or 1.1 and 10, especially in the face of things like empathy, symmetry arguments, reductionist personal identity, causal and acausal cooperation (not an intrinsic value, but may prescribe the same actions). I also don’t understand the point of preaching egoism; how does it help either you personally or everyone else? Finally, 10 and 1000 are both small relative to astronomical waste.
Self-preservation and lots of other self-centered behaviors are real psychological adaptations, which make indifference between self and random other very unlikely, so I draw a tentative lower bound at the factor of 10. Empathy extends fairness to other people, offering them control proportional to what’s available to me and not just what they can get hold of themselves, but it doesn’t suggest equal parts for all, let alone equal to what’s reserved for my own preference. Symmetry arguments live at the more simplistic levels of analysis and don’t apply. What about personal identity? What do you mean by “prescribing the same action” based on cooperation, when the question was about choice of own vs. others’ lives? I don’t see a situation where cooperation would make the factor visibly closer to equal.
I’m not “preaching egoism”, I’m being honest about what I believe human preference to be, and any given person’s preference in particular, and so I’m raising an issue with what I believe to be an error about this. Of course, it’s hypothetically in my interest to fool other people into believing they should be as altruistic as possible, in order to benefit from them, but it’s not my game here. Preference is not for grabs.
I don’t see this argument. Why is astronomical waste relevant? Preference stems from evolutionary godshatter, so I’d expect something on the order of tribe-sized (taking into account that you are talking about random strangers and not close friends/relatives).
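(An aside, to make the “factor” under discussion concrete under one simple reading. The linear model and the symbols $w$, $u_\text{self}$, $u_i$ below are illustrative assumptions, not anything either commenter stated. If your own welfare is weighted by a factor $w$ relative to a random stranger’s, so that total preference is roughly

$$U = w \, u_\text{self} + \sum_{i \neq \text{self}} u_i,$$

then sacrificing your life to save $n$ strangers of comparable welfare is preferred only when $n > w$; with the tentative range $10 \le w \le 1000$ discussed above, a one-for-one trade never comes out preferred under this model.)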
There is an enormous range of variation in human preference. That range may be a relatively small part of the space of all possible preferences of intelligent entities, but in absolute terms that range is broad enough to defy most (human) generalizations.
There have been people who made the conscious decision to sacrifice their own lives in order to offer a stranger a chance of survival. I don’t see how your theory accounts for their behavior.
Error of judgment. People are crazy.
Yes, but why are you so sure that it’s crazy judgment and not crazy values? How do you know more about their preferences than they do?
I know that people often hold confused explicit beliefs, so that a person holding belief X is only weak evidence about X, especially if I can point to a specific reason why holding belief X would be likely (other than that X is true). Here, we clearly have psychological adaptations that cry altruism. Nothing else is necessary, as long as the reasons I expect X to be false are stronger than the implied evidence of people believing X. And I expect there to be no crazy values (except for the cases of serious neurological conditions, and perhaps not even then).
Are you proposing that evolution has a strong enough effect on human values that we can largely ignore all other influences?
I’m quite dubious of that claim. Different cultures frequently have contradictory mores, and act on them.
Or, from another angle: if values don’t influence behavior, what are they and why do you believe they exist?
Humans have psychological drives, and act on some balance of their effects, through a measure of reflection and cultural priming. To get to more decision-theoretic values, you have to resolve all conflicts between these drives. I tentatively assume this process to be confluent, that is, the final result depends little on the order in which you apply the moral arguments that shift one’s estimation of value. Cultural influence counts as such a collection of moral arguments (as does one’s state of knowledge of facts and understanding of the world), one that can bias your moral beliefs. But if rational moral arguing is confluent, these deviations get canceled out.
(I’m only sketching here what amounts to my still confused informal understanding of the topic.)
Huh. I wouldn’t expect unmodified humans to be able to resolve value conflicts in a confluent way; insofar as my understanding of neurology is accurate, holding strong beliefs involves some level of self-modification. If prior states influence the direction of self-modification (which I would think they must), confluence goes out the window. That is, moral arguments don’t just shift value estimations, they shift the criteria by which future moral arguments are judged. I think this is the same sort of thing we see with halo effects.
Not humans themselves, sure. To some extent there undoubtedly is divergence caused by environmental factors, but I don’t think that surface features, such as explicit beliefs, adequately reflect its nature.
Of course, this is mostly useless speculation, which I only explore in hope of finding inspiration for more formal study, down the decision theory road.
The difference is real. Whether it is also the real reason is another question.
It rarely bothers me when insightful original comments are voted up more than their (more or less) equally insightful responses. In my view, the original comment often “deserves” more upvotes for raising an interesting issue in the first place and thereby expanding a fruitful discussion.
A heart transplant has a much higher expected utility than cryonics. Could that be a major cause of the negative response?
Disagree. A heart transplant that adds a few decades is less valuable than a cryopreservation that adds a few millennia.
Also, heart transplants are a congestion resource whereas cryonics is a scale resource: donor hearts are scarce, so one person’s transplant crowds out another’s, whereas cryopreservation capacity can expand to meet demand.
So what? The value of winning the lottery is much higher than working for the next five years, but that doesn’t mean it has a higher expected utility.
The expected value of an act is the sum of the products (utilities x probabilities).
Unless you think a heart transplant is just as probable to work as cryonics, you must consider more than simply the value of each act.
To offset a 100-fold difference in how much longer you live (even not accounting for other utilities like quality of life), it takes a 100-fold difference in probability. I don’t think cryonics is 100 times less likely to work than a heart transplant.
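(To make the arithmetic explicit: the expected value of an act is $\mathbb{E}[U] = \sum_i p_i u_i$. Suppose, purely for illustration, that a heart transplant adds $T$ years of life with success probability $p_h$, and that successful cryorestoration adds about $100T$ years with probability $p_c$; the symbols $T$, $p_h$, $p_c$ are placeholders, and the 100:1 ratio is the 100-fold figure above rather than an estimate from the thread. Cryonics then has the higher expected value exactly when

$$p_c \cdot 100T > p_h \cdot T \iff p_c > \frac{p_h}{100},$$

i.e. cryonics needs a success probability of at least one hundredth that of a heart transplant to come out ahead on this measure, quality-of-life differences aside.)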