The objective account, I think, is more moral antirealist. It says, “The world contains only paperclips and happy humans, never utilities! The world contains only paperclip-maximisers and happy-human-maximisers, never utility-maximisers!”
Regarding the worry that the subjective account makes us focus too much on moral realism and on the idea that values are out there in the world, a few responses.
Moral realism's problem is not exactly that there is no correct morality, but rather that there are far too many "correct" answers: you can pick and choose which morality to follow, so nothing constrains an agent's values unless some other source provides that constraint.
In one sense, values are out there in the world, since they are in the computer or brain, which is part of the world, whether they are computed by utility/reward functions or by something else.
Utility is a variable in the same way that the number n is a variable: paperclips and happy humans are concrete instantiations of a utility, just as 47 is a concrete number while n is the abstract concept, standing for an arbitrary number.
Thus, when we say that something is a utility maximizer, we aren't saying anything about which specific utility it is maximizing; but we can concretize the utility function, for example by saying that it is a paperclip maximizer, and then say much more about it.
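To make the "free variable" point concrete, here is a minimal sketch (the function names and the toy world representation are my own, purely illustrative): the generic maximizer is a schema parameterized by a utility function, and "paperclip maximizer" is one instantiation of that schema.

```python
# Illustrative sketch only: the names and toy "world" dict are invented for
# this comment. The point is that "utility maximizer" leaves the utility as a
# free variable, while "paperclip maximizer" fixes it to a concrete function.
from typing import Callable, Iterable

World = dict  # toy world state, e.g. {"paperclips": 3, "happy_humans": 2}

def best_action(actions: Iterable[Callable[[World], World]],
                world: World,
                utility: Callable[[World], float]) -> Callable[[World], World]:
    """Generic 'utility maximizer': pick the action whose outcome scores highest.
    Says nothing about *which* utility is being maximized."""
    return max(actions, key=lambda act: utility(act(world)))

# Concrete instantiations of the free variable:
def paperclip_utility(world: World) -> float:
    return world.get("paperclips", 0)

def happy_human_utility(world: World) -> float:
    return world.get("happy_humans", 0)

# Usage: the same schema becomes a paperclip maximizer once the utility is fixed.
actions = [
    lambda w: {**w, "paperclips": w.get("paperclips", 0) + 1},
    lambda w: {**w, "happy_humans": w.get("happy_humans", 0) + 1},
]
world = {"paperclips": 0, "happy_humans": 0}
chosen = best_action(actions, world, paperclip_utility)  # picks the paperclip action
```

Once the utility function is fixed, you can start proving things about the agent's behavior that you simply cannot prove about "utility maximizers" in the abstract.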
I do think there's too much focus on utility functions as a general class, but that comes from practical concerns: without naming the specific utility function an agent is maximizing, it's hard to prove what it will do, and people on LW often assume their results go further than they actually do.