I’m not “preaching egoism”; I’m being honest about what I believe human preference to be, and any given person’s preference in particular, and so I’m pointing out what I believe to be an error about this.
There is an enormous range of variation in human preference. That range may be a relatively small part of the space of all possible preferences of intelligent entities, but in absolute terms that range is broad enough to defy most (human) generalizations.
There have been people who made the conscious decision to sacrifice their own lives in order to offer a stranger a chance of survival. I don’t see how your theory accounts for their behavior.
Error of judgment. People are crazy.
Yes, but why are you so sure that it’s crazy judgment and not crazy values? How do you know more about their preferences than they do?
I know that people often hold confused explicit beliefs, so that a person holding belief X is only weak evidence about whether X is true, especially if I can point to a specific reason why holding belief X would be likely (other than that X is true). Here, we clearly have psychological adaptations that cry altruism. Nothing else is necessary, as long as the reasons I expect X to be false are stronger than the implied evidence of people believing X. And I expect there to be no crazy values (except in cases of serious neurological conditions, and perhaps not even then).
Are you proposing that evolution has a strong enough effect on human values that we can largely ignore all other influences?
I’m quite dubious of that claim. Different cultures frequently have contradictory mores, and act on them.
Or, from another angle: if values don’t influence behavior, what are they and why do you believe they exist?
Humans have psychological drives, and act on some balance of their effects, through a measure of reflection and cultural priming. To get to more decision-theoretic values, you have to resolve all conflicts between these drives. I tentatively assume this process to be confluent, that is, the final result depends little on the order in which you apply the moral arguments that shift one’s estimation of value. Cultural influence counts as such a collection of moral arguments (as does one’s state of knowledge of facts and understanding of the world) that can bias your moral beliefs. But if rational moral argument is confluent, these deviations get canceled out.
(I’m only sketching here what amounts to my still confused informal understanding of the topic.)
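As a toy illustration of what the confluence claim would mean, here is a minimal Python sketch. Everything in it is an illustrative assumption, not anything established above: the named drives, the weights, and especially the rule that each moral argument settles exactly one drive’s weight (which makes the arguments commute, so confluence holds trivially).

```python
import itertools

# Toy model: values are weights over psychological drives; each "moral
# argument" resolves one conflict by settling a drive's weight.
# Assumption (for illustration only): arguments touch disjoint drives,
# so they commute and the resolution process is trivially confluent.
initial_values = {"self_preservation": 0.9, "altruism": 0.2, "status": 0.5}

arguments = [
    ("altruism", 0.6),           # reflection raises the weight on altruism
    ("status", 0.1),             # an argument discounts status-seeking
    ("self_preservation", 0.7),  # another tempers self-preservation
]

def apply_arguments(values, ordering):
    values = dict(values)
    for drive, settled_weight in ordering:
        values[drive] = settled_weight
    return values

# Every ordering of the arguments reaches the same endpoint, so a
# culturally biased ordering affects the path but not the result.
endpoints = {tuple(sorted(apply_arguments(initial_values, order).items()))
             for order in itertools.permutations(arguments)}
assert len(endpoints) == 1
```

In this toy, cultural influence can reorder or delay the arguments without changing where reflection ends up, which is the cancellation claimed above.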
Huh. I wouldn’t expect unmodified humans to be able to resolve value conflicts in a confluent way; insofar as my understanding of neurology is accurate, holding strong beliefs involves some level of self-modification. If prior states influence the direction of self-modification (which I would think they must), confluence goes out the window. That is, moral arguments don’t just shift value estimates; they shift the criteria by which future moral arguments are judged. I think this is the same sort of thing we see with halo effects.
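A small variation on the sketch above shows how confluence fails once an argument’s acceptance depends on the state left by earlier arguments. The tolerance rule here is a made-up stand-in for “arguments shift the criteria by which future arguments are judged”, not a claim about how persuasion actually works:

```python
# Variation: an argument is accepted only if its conclusion is close
# enough to the agent's current value -- a crude stand-in for arguments
# shifting the criteria by which later arguments are judged.
def apply_path_dependent(value, targets, tolerance=0.3):
    for target in targets:
        if abs(target - value) <= tolerance:  # acceptance depends on prior state
            value = target
    return value

# The same two arguments in different orders reach different endpoints:
print(apply_path_dependent(0.5, [0.7, 0.95]))  # 0.95: gradual shift, both accepted
print(apply_path_dependent(0.5, [0.95, 0.7]))  # 0.7: the big jump is rejected first
```

Under that rule the order of exposure to arguments determines the endpoint, which is exactly the non-confluence being described.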
Not humans themselves, sure. To some extent there undoubtedly is divergence caused by environmental factors, but I don’t think that surface features, such as explicit beliefs, adequately reflect its nature.
Of course, this is mostly useless speculation, which I explore only in the hope of finding inspiration for more formal study down the decision-theory road.