People can intentionally maximize anything, including the number of paperclips in the universe. Suppose there were a religion or school of philosophy that taught that maximizing paperclips is deontologically the right thing to do - not because it’s good for anyone, or because Divine Clippy would smite them for not doing it, but simply because morality demands it. And so they choose to do it, even if they hate it.
In that case, I would say their true utility function was “follow the deontological rules” or “avoid being smitten by Divine Clippy”, and that maximising paperclips is an instrumental subgoal.
In many other cases, where the person’s actions did not seem to maximise anything at all, I would be happy to say that they simply weren’t acting as a utility maximiser.
If you define “utility function” as “what agents maximize” then your above statement is true but tautological. If you define “utility function” as “an agent’s relation between states of the world and that agent’s hedons” then it’s not true that you can only maximize your utility function.
I certainly do not define it the second way. Most people care about something other than their own happiness, and some people may care about their own happiness very little, not at all, or even negatively. I really don’t see why a ‘happiness function’ would be even slightly interesting to decision theorists.
I think I’d want to define a utility function as “what an agent wants to maximise”, but I’m not entirely clear how to unpack the word ‘want’ in that sentence; I will admit I’m somewhat confused.
However, I’m not particularly concerned about my statements being tautological; they were meant to be, since they are arguing against statements that are tautologically false.
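To make that first reading concrete, here is a toy sketch (the actions, states and probabilities are invented purely for illustration): the utility function is just whatever the agent’s choices maximise in expectation over world states, and nothing in the machinery requires it to be the agent’s own happiness.

```python
# Purely illustrative: a toy expected-utility maximiser whose utility
# function counts paperclips in the resulting world state, not the
# agent's own happiness. The outcome model below is made up.

def utility(state):
    """Utility over world states: here, just the paperclip count."""
    return state["paperclips"]

def expected_utility(action, outcomes):
    """Probability-weighted average utility over an action's outcomes."""
    return sum(p * utility(s) for p, s in outcomes[action])

def choose(outcomes):
    """Pick the action with the highest expected utility."""
    return max(outcomes, key=lambda a: expected_utility(a, outcomes))

# Hypothetical outcome model: each action maps to (probability, state) pairs.
outcomes = {
    "run_paperclip_factory": [(0.9, {"paperclips": 1000, "happiness": -5}),
                              (0.1, {"paperclips": 0,    "happiness": -5})],
    "relax_on_beach":        [(1.0, {"paperclips": 0,    "happiness": 10})],
}

print(choose(outcomes))  # "run_paperclip_factory": maximises paperclips, not happiness
```

Swap `utility` for one that reads off `happiness` instead and the same chooser becomes a hedon maximiser; the maximisation machinery itself doesn’t care what is being maximised.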
No. People can be stupid. They can even be wrong about what their utility function is.
It is “that which you would maximise if you weren’t a dumbass and knew what you wanted”.
Good point.
Perhaps I should have said “it’s impossible to intentionally maximise anything other than your utility function”.