Huh. I wouldn’t expect unmodified humans to be able to resolve value conflicts in a confluent way; insofar as my understanding of neurology is accurate, holding strong beliefs involves some level of self-modification. If prior states influence the direction of that self-modification (which I would think they must), confluence goes out the window: moral arguments don’t just shift value estimates, they shift the criteria by which future moral arguments are judged. I think this is the same sort of thing we see with halo effects.
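To make the non-confluence worry concrete, here is a toy sketch (the update rule and numbers are entirely my own invention, not anything drawn from actual neurology): each “moral argument” shifts a scalar value estimate, but arguments that conflict with the current leaning get discounted, so accepting an argument changes the criterion by which later arguments are judged. The same two arguments, heard in opposite orders, then land the agent in different end states.

```python
# Toy, purely illustrative model of non-confluent value updating.
# Assumptions (mine, not the comment's): a single scalar "value estimate",
# and a fixed discount on arguments that conflict with the current leaning.
from functools import reduce


def apply_argument(value: float, strength: float) -> float:
    """Apply one 'moral argument' to a scalar value estimate.

    Arguments that conflict with the current leaning are discounted,
    so accepting an argument shifts the criterion by which later
    arguments are judged.
    """
    receptivity = 1.0 if value * strength >= 0 else 0.25
    return value + receptivity * strength


arguments = [+1.0, -1.0]  # the same two arguments, applied in two orders

end_a = reduce(apply_argument, arguments, 0.0)            # hears +1.0 first
end_b = reduce(apply_argument, reversed(arguments), 0.0)  # hears -1.0 first

print(end_a)  # 0.75  -> ends up leaning positive
print(end_b)  # -0.75 -> ends up leaning negative
```

In rewriting-systems terms, the update relation isn’t confluent: the same set of arguments has order-dependent end states, which is exactly the path-dependence the halo effect seems to exemplify.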
Not humans themselves, sure. There is undoubtedly some divergence caused by environmental factors, but I don’t think surface features, such as explicit beliefs, adequately reflect its nature.
Of course, this is mostly useless speculation, which I explore only in the hope of finding inspiration for more formal study down the decision-theory road.