So utility theory is a useful tool, but as far as I understand it is not directly used as a source of moral guidance (although I assume that once you have some other source, you can use utility theory to maximize it). Utilitarianism as a school of normative ethics, on the other hand, is concerned with exactly that, and you can hear people in EA talking about “maximizing utility” as the end in and of itself all the time. It was in this latter sense that I was asking.
Perhaps most people don’t have this in the back of their minds when they think of utility, but for me this is what I’m thinking about. The aggregation is still confusing to me, so take a simple case: if I want to maximise total utility and am in a situation that only affects a single entity, then increasing utility is the same to me as getting that entity into the states it finds more preferable.
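To make that concrete, here is a minimal sketch (in Python, with made-up state names and utility numbers, since none are given above) of what I mean: when only one entity is affected, maximizing total utility reduces to picking the state that entity ranks highest.

```python
# Hypothetical example: one affected entity, its preference ordering
# encoded as numeric utilities (higher number = more preferred state).
entity_utility = {
    "state_a": 1.0,
    "state_b": 3.0,
    "state_c": 2.0,
}

def total_utility(state, utilities_by_entity):
    """Aggregate utility of a state, summed over all affected entities."""
    return sum(u[state] for u in utilities_by_entity)

# Only one entity is affected, so maximizing the total is the same as
# moving that entity to its most preferred state.
best_state = max(entity_utility, key=lambda s: total_utility(s, [entity_utility]))
print(best_state)  # -> "state_b"
```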
Having read some of your other comments, I expect you to ask whether the top preference of a thermostat is its goal temperature, and to this I have no good answer.
For things like a thermostat or a toy robot, you can obviously see that there is a behavioral objective which we could use to infer preferences. But is the reason that thermostats are not included in utility calculations that this behavioral objective does not actually map to a preference ordering, or that their weight in the aggregation is 0? A rough sketch of the two possibilities is below.
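Here is the sketch (again with made-up numbers and a hypothetical `aggregate` function, just to spell out the question): either the thermostat has a preference ordering inferred from its behavioral objective but contributes with weight 0, or its behavior simply doesn’t map to a preference ordering, so it never appears in the sum at all.

```python
# Hypothetical weighted aggregation over entities.
# Option 1: the thermostat gets a utility function (inferred from its
#           behavioral objective, "closer to the set point is better")
#           but its weight in the aggregation is 0.
# Option 2: its behavior doesn't map to a preference ordering at all,
#           so it is simply omitted from the list of entities.

def aggregate(state, entities):
    """Weighted sum of utilities; entities are (weight, utility_fn) pairs."""
    return sum(weight * utility(state) for weight, utility in entities)

human = (1.0, lambda temp: -abs(temp - 22))       # prefers roughly 22°C
thermostat = (0.0, lambda temp: -abs(temp - 20))  # "prefers" its set point of 20°C

# Under option 1 the thermostat is listed but weighted to zero;
# under option 2 we would just leave it out of the list entirely.
# Either way it contributes nothing to the total.
print(aggregate(21, [human, thermostat]))
```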