Humans have psychological drives and act on some balance of their effects, filtered through a measure of reflection and cultural priming. To arrive at more decision-theoretic values, you have to resolve all the conflicts between these drives. I tentatively assume this process is confluent, that is, the final result depends little on the order in which you apply the moral arguments that shift one’s estimation of value. Cultural influence counts as such a collection of moral arguments (as does one’s state of knowledge of facts and understanding of the world), and it can bias your moral beliefs. But if rational moral arguing is confluent, these deviations get canceled out.
(I’m only sketching here what amounts to my still confused informal understanding of the topic.)
Huh. I wouldn’t expect unmodified humans to be able to resolve value conflicts in a confluent way; insofar as my understanding of neurology is accurate, holding strong beliefs involves some level of self-modification. If prior states influence the direction of that self-modification (and I would think they must), confluence goes out the window. That is, moral arguments don’t just shift value estimations; they shift the criteria by which future moral arguments are judged. I think this is the same sort of thing we see with halo effects.
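To make the distinction concrete, here is a toy sketch of my own (purely illustrative, not anything proposed in the thread): model a value estimate as a single number and "moral arguments" as update operations. If updates simply accumulate, every ordering of the arguments lands on the same final state (confluence); if the current state changes how much the next argument moves you, as in the self-modification picture above, different orderings diverge. The specific update rules here are hypothetical stand-ins.

```python
import itertools
from functools import reduce

def confluent_update(state, argument):
    # Order-independent update: arguments simply accumulate.
    return state + argument

def path_dependent_update(state, argument):
    # The current state scales how much the next argument moves you,
    # mimicking "arguments shift the criteria for future arguments".
    return state + argument * (1 + state ** 2)

def final_states(update, arguments, start=0.0):
    # Apply the arguments in every possible order and collect the outcomes.
    return {round(reduce(update, perm, start), 9)
            for perm in itertools.permutations(arguments)}

args = [0.2, -0.1, 0.4]
print(len(final_states(confluent_update, args)))       # 1: every order agrees
print(len(final_states(path_dependent_update, args)))  # more than 1: order matters
```

The point of the sketch is only that confluence is a property of the update rule, not of the arguments themselves: the same three "arguments" either do or don't commute depending on whether earlier updates reshape later ones.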
Not humans themselves, sure. Environmental factors undoubtedly cause some divergence, but I don’t think surface features, such as explicit beliefs, adequately reflect its nature.
Of course, this is mostly useless speculation, which I explore only in the hope of finding inspiration for more formal study down the decision-theory road.