I am saying that a CEV that extrapolated human morality would generally be utilitarian, but that it would assign zero utility to the satisfaction of what I call “malicious preferences.” This is because I think a CEV of human morality would judge such preferences to be immoral and would discard or suppress them.
Zero is a strange number to have specified there, but then I don’t know the shape of the function you’re describing. I would have expected a non-specific “negative utility” in its place.
You’re probably right; I was typing fairly quickly last night.