To add to this: if, for the sake of argument, there were a formalization of “utility” from utilitarianism, that would imply having a function over a region of space (or spacetime) that evaluates how that region feels, or what it wants. (To actually implement an AI with it, that function would also have to be approximated somehow on the actual ontology we employ, which we don’t know how to do either, but I digress.)
Naturally, there’s no reason for this function, taken over a large region of space (up to and including the whole Earth), to equal the sum, the average, or any other linear combination of the same function taken over the parts of that region. Indeed, that very obviously wouldn’t work if the region were your head and the sub-regions were 1 nm^3 cubes.
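To put the claim in symbols (the function name $U$ and the partition notation are mine, just to state the point, not part of any standard formalization): for disjoint sub-regions $R_i$ and any fixed coefficients $c_i$,

$$U\!\left(\bigcup_i R_i\right) \neq \sum_i c_i\, U(R_i) \quad \text{in general.}$$

In the head example, no individual 1 nm^3 cube $R_i$ contains anything that feels or wants, so $U(R_i) \approx 0$ for every cube and any linear combination of them is $\approx 0$, while $U(\text{head})$ plainly isn’t.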