Would you disagree that the differences mentioned by multifoliaterose are real?
Yes, I would disagree. A large fraction of the people who get heart transplants are old and thus not very productive. More generally, medical expenses in the last three years of life often run into the tens of thousands of US dollars and can easily reach a hundred thousand. Most people in the US and Europe are not at all productive in their last year of life.
If I personally were debilitated to the point of not being able to contribute value comparable to the value of a heart transplant, then I would prefer to decline the transplant and have the money go to a cost-effective charity. I would rather die knowing that I had done something to help others than live knowing that I had been a burden on society. Others may feel differently, and that’s fine. We all have our limits. But getting a heart transplant when one is too debilitated to contribute something of comparable value should not be considered philanthropic. Neither should cryonics.
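To make the opportunity-cost reasoning concrete, here is a minimal back-of-envelope sketch in Python. The 100,000 USD figure echoes the end-of-life spending mentioned above; the charity cost per life saved is a placeholder assumption, not anything claimed in the thread.

```python
# Back-of-envelope opportunity-cost comparison (illustrative only).
# The 100,000 USD figure echoes the end-of-life medical spending
# mentioned above; the charity cost per life saved is a placeholder
# assumption, not a researched estimate.

END_OF_LIFE_CARE_COST = 100_000        # USD, rough figure from the comment above
CHARITY_COST_PER_LIFE_SAVED = 2_000    # USD, hypothetical cost-effective charity

lives_saved_if_donated = END_OF_LIFE_CARE_COST / CHARITY_COST_PER_LIFE_SAVED

print(f"Under these assumptions, donating the money instead could save "
      f"roughly {lives_saved_if_donated:.0f} lives.")
```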
You are making an error by not holding your own well-being in greater regard than the well-being of others. It’s a known aspect of human value.
Err, are you saying that his values are wrong, or just that they’re not in line with majoritarian values?
For one thing, multifoliaterose is probably extrapolating from the values xe signals, which aren’t identical to the values xe acts on. I don’t doubt the sincerity of multifoliaterose’s hypothetical resolve (and indeed I share it), but I suspect that I would find reasons to conclude otherwise were I actually in that situation. (Being signed up for cryonics might make me significantly more willing to actually refuse treatment in such a case, though!)
If you missed it, see my comment here. I guess the comment of mine you responded to was somewhat misleading; I did not intend to claim anything about my actual future behavior; rather, I intended simply to make a statement about what I think my future behavior should be.
To put on my Robin Hanson hat, I’d note that you’re acknowledging this level of selflessness to be a Far value and probably not a Near one.
I have strong sympathies toward privileging Far values over Near ones in many of the cases where they conflict in practice, but it doesn’t seem quite accurate to declare that your Far values are your “true” ones and that the Near ones are to be discarded entirely.
So, I think that the right way to conceptualize this is to say that a given person’s values are not fixed but vary with time. I think that at the moment my true values are as I describe. In the course of being tortured, my true values would be very different from the way they are now.
The reason why I generally privilege Far values over Near values so much is that I value coherence a great deal and I notice that my Near values are very incoherent. But of course if I were being tortured I would have more urgent concerns than coherence.
The Near/Far distinction is about more than just decisions made under duress or temptation. Far values have a strong signaling component, and they’re subject to their own biases.
Can you give an example of a bias which arises from Far values? I should say that I haven’t actually carefully read Hanson’s posts on Near vs. Far modes. In general I think that Hanson’s views of human nature are very misguided (though closer to the truth than is typical).
Willingness to wreck people’s lives (usually but not always other people’s) for the sake of values which may or may not be well thought out.
This is partly a matter of the signaling aspect, and partly because, since Far values are Far, you’re less likely to be accurate about them.
Okay, thanks for clarifying. I still haven’t read Robin Hanson on Near vs. Far (nor do I have much interest in doing so) but based on your characterization of Far, I would say that I believe that it’s important to strike a balance between Near vs. Far. I don’t really understand what part of my comment orthogonal is/was objecting to—maybe the issue is linguistic/semantic more than anything else.
I’m saying that he acts under a mistaken idea about his true values. He should be more selfish (recognize himself as being more selfish).
I see what I say about my values in a neutral state as more representative of my “true values” than what I would say about my values in a state of distress. Yes, if I were actually in need of a heart transplant that would come at the opportunity cost of something of greater social value, then I might very well opt for the transplant. But if I could precommit to declining a transplant under such circumstances by pushing a button right now, then I would do so.
Similarly, if I were being tortured for a year and, while being tortured, were given the option to make it stop for a while in exchange for 50 more years of torture later on, then I might take the option, but I would precommit now to not taking such an option if possible.
What you would do has little bearing on what you should do. The above argument doesn’t argue its case. If you are mistaken about your values, of course you can theoretically use those mistaken beliefs to consciously precommit to follow them, no question there.
By what factor? Assume a random stranger.
Maybe tens or thousands, but I’m as ignorant as anybody about the answer, so it’s a question of pulling a best guess, not of accurately estimating the hidden variable.
I don’t understand how you can be uncertain between 10 and 1000 but not 1 and 10 or 1.1 and 10, especially in the face of things like empathy, symmetry arguments, reductionist personal identity, causal and acausal cooperation (not an intrinsic value, but may prescribe the same actions). I also don’t understand the point of preaching egoism; how does it help either you personally or everyone else? Finally, 10 and 1000 are both small relative to astronomical waste.
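As an aside, to make concrete what such a self/other weighting factor would cash out to in a forced life-for-lives choice, here is a minimal sketch in Python; the specific factors and the break-even framing are illustrative assumptions, not anyone’s stated position.

```python
# Toy illustration of a self/other weighting factor. The numbers and the
# break-even framing are hypothetical, used only to show what a factor of
# 10 vs. 1000 would mean in a forced choice between one's own life and
# the lives of strangers.

def prefers_sacrifice(self_weight: float, strangers_saved: int) -> bool:
    """True if saving `strangers_saved` strangers outweighs one's own life,
    given that one's own life is weighted `self_weight` times a stranger's."""
    return strangers_saved > self_weight

for self_weight in (10, 1000):
    # Smallest number of strangers for which the trade is worth making.
    break_even = next(n for n in range(1, 10_000)
                      if prefers_sacrifice(self_weight, n))
    print(f"factor {self_weight}: trade own life to save {break_even}+ strangers")
```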
Self-preservation and lots of other self-centered behaviors are real psychological adaptations, which make indifference between self and a random other very unlikely, so I draw a tentative lower bound at a factor of 10. Empathy extends fairness to other people, offering them control proportional to what’s available to me and not just what they can get hold of themselves, but it doesn’t suggest equal parts for all, let alone equal to what’s reserved for my own preference. Symmetry arguments live at the more simplistic levels of analysis and don’t apply. What about personal identity? What do you mean by “prescribing the same actions” based on cooperation, when the question was about the choice of one’s own vs. others’ lives? I don’t see a situation where cooperation would make the factor visibly closer to equal.
I’m not “preaching egoism”; I’m being honest about what I believe human preference to be, and any given person’s preference in particular, and so I’m raising an issue with what I believe to be an error about this. Of course, it’s hypothetically in my interest to fool other people into believing they should be as altruistic as possible, in order to benefit from them, but that’s not my game here. Preference is not up for grabs.
I don’t see this argument. Why is astronomical waste relevant? Preference stems from evolutionary godshatter, so I’d expect something on the order of tribe-sized (taking into account that you are talking about random strangers and not close friends/relatives).
There is an enormous range of variation in human preference. That range may be a relatively small part of the space of all possible preferences of intelligent entities, but in absolute terms that range is broad enough to defy most (human) generalizations.
There have been people who made the conscious decision to sacrifice their own lives in order to offer a stranger a chance of survival. I don’t see how your theory accounts for their behavior.
Error of judgment. People are crazy.
Yes, but why are you so sure that it’s crazy judgment and not crazy values? How do you know more about their preferences than they do?
I know that people often hold confused explicit beliefs, so that a person holding belief X is only weak evidence about X, especially if I can point to a specific reason why holding belief X would be likely (other than that X is true). Here, we clearly have psychological adaptations that cry altruism. Nothing else is necessary, as long as the reasons I expect X to be false are stronger than the implied evidence of people believing X. And I expect there to be no crazy values (except for the cases of serious neurological conditions, and perhaps not even then).
Are you proposing that evolution has a strong enough effect on human values that we can largely ignore all other influences?
I’m quite dubious of that claim. Different cultures frequently have contradictory mores, and act on them.
Or, from another angle: if values don’t influence behavior, what are they and why do you believe they exist?
Humans have psychological drives, and act on some balance of their effects, through a measure of reflection and cultural priming. To get to more decision-theoretic values, you have to resolve all conflicts between these drives. I tentatively assume this process to be confluent, that is, the final result depends little on the order in which you apply the moral arguments that shift one’s estimation of value. Cultural influence counts as such a collection of moral arguments (as does one’s state of knowledge of facts and understanding of the world) that can bias your moral beliefs. But if rational moral arguing is confluent, these deviations get canceled out.
(I’m only sketching here what amounts to my still confused informal understanding of the topic.)
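To illustrate what confluence means in this toy sense, here is a minimal sketch in Python; the “arguments” are arbitrary numeric update rules, not models of real moral arguments. If the updates commute, every ordering reaches the same endpoint; if they don’t, the endpoint is path-dependent.

```python
from itertools import permutations

# Toy model: a "value estimate" is a single number, and each moral
# argument is an update rule applied to it. These rules are arbitrary
# illustrations, not models of real arguments.

def arg_a(v): return v + 1.0          # additive update
def arg_b(v): return v + 2.0          # additive update
def arg_c(v): return v * 1.5          # multiplicative update

def final_value(start, arguments):
    for arg in arguments:
        start = arg(start)
    return start

def is_confluent(start, arguments):
    results = {round(final_value(start, order), 9)
               for order in permutations(arguments)}
    return len(results) == 1

# Purely additive updates commute, so the process is confluent:
print(is_confluent(0.0, [arg_a, arg_b]))   # True
# Mixing additive and multiplicative updates makes the outcome depend
# on the order in which the arguments are applied:
print(is_confluent(0.0, [arg_a, arg_c]))   # False
```

In this toy model, confluence is just commutativity of the updates; the open question in the comment above is whether real moral arguing behaves more like the first case or the second.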
Huh. I wouldn’t expect unmodified humans to be able to resolve value conflicts in a confluent way; insofar as my understanding of neurology is accurate, holding strong beliefs involves some level of self-modification. If prior states influence the direction of self-modification (which I would think they must), confluence goes out the window. That is, moral arguments don’t just shift value estimations, they shift the criteria by which future moral arguments are judged. I think this is the same sort of thing we see with halo effects.
Not humans themselves, sure. To some extent there undoubtedly is divergence caused by environmental factors, but I don’t think that surface features, such as explicit beliefs, adequately reflect its nature.
Of course, this is mostly useless speculation, which I only explore in hope of finding inspiration for more formal study, down the decision theory road.