I may be misunderstanding here, but I think there’s a distinction you’re failing to make:
Maximizing expected utility is about possible future states (only one of which turns out to be real, so I guess it’s really maximizing utility over expected future properties of the amplitude field over configuration space, rather than properties of individual configurations, if one wants to get nitpicky...), while average/total/whatever utilitarianism is about how you sum up the good experienced/received among the people that would exist in each of those modeled states.
At least that’s my understanding.
I’m looking at that distinction and un-making it. I don’t see how you can choose not to average utility within an outcome, yet choose to average utility over possible future states.
Oh, okay then. The version of the post that I read seemed more to be failing to notice the distinction than to be dealing with it head-on. Anyways, I’d still say the distinction is real enough that it’s not quite obvious one implies the other.
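To sketch the distinction as I see it (the notation here is made up purely for illustration): the decision-theory part says to pick the action a that maximizes

EU(a) = \sum_s P(s \mid a)\, U(s),

a probability-weighted sum over possible future states s, while the average-vs-total question is about what U(s) looks like inside each state:

U_{\text{total}}(s) = \sum_{i \in s} u_i(s) \qquad \text{vs.} \qquad U_{\text{avg}}(s) = \frac{1}{n_s} \sum_{i \in s} u_i(s),

where u_i(s) is how well off person i is in state s and n_s is how many people exist there. The outer sum over states is pinned down by decision theory; the inner sum over people is the moral question.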
Anyways, the whole “maximize expected utility over future states” rule (rather than over future selves, I guess) comes straight out of the various theorems used to derive decision theory. Via the vulnerability arguments (money pumps, Dutch books, etc.), it’s basically a “how not to be stupid, no matter what your values are” thing.
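(The rough shape of those theorems, in the von Neumann–Morgenstern version at least: if an agent’s preferences over gambles satisfy a few axioms (completeness, transitivity, continuity, independence), then there is some utility function u such that

A \succeq B \iff \mathbb{E}_A[u] \ge \mathbb{E}_B[u],

and violating those axioms is exactly what leaves an agent open to being money-pumped.)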
The average vs. total utilitarianism question would be more of a moral position, a property of one’s utility function itself. So that would have to come, at least in part, from appealing to the various bits of us that process moral reasoning. First, it requires some assumption of the equal inherent value of humans, in some sense. (Though now that I think about it, that condition may be weakenable.)
Next, one has to basically, well, ultimately figure out whether maximizing average or total good across all persons is preferable. Maximizing total good produces oddities like the repugnant conclusion, of course.
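(To gesture at why, with deliberately made-up numbers: a world of 10 people each at welfare 100 versus a world of 10,000 people each at welfare 0.2 gives

10 \times 100 = 1000 \qquad \text{vs.} \qquad 10000 \times 0.2 = 2000,

so total utilitarianism ranks the second, barely-worth-living world higher.)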
An appeal to, say, a sense of fairness would be an example of an argument for average utilitarianism.
An argument against would be something like this: average utilitarianism seems to imply that the inherent value of a person, that is, how much it matters when something good or bad happens to them, decreases as the population increases. That rubs my moral sense the wrong way.
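(Concretely: if the overall value is U_{\text{avg}} = \frac{1}{n} \sum_i u_i, then giving any one person an extra \delta of welfare moves the overall evaluation by only \delta / n, which shrinks as the population n grows. That is the sense in which each individual seems to count for less in a bigger world.)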
(Note: it’s not currently obvious to me which is the Right Way, so I’m not trying to push either one. I’m merely giving examples of the kinds of arguments that would, I think, be relevant here, i.e., stuff that appeals to our moral senses or the implications thereof, rather than trying to make the internal structure of one’s utility function correspond to the structure of decision theory itself. While utility functions like that may, in a certain sense, have a mathematical elegance, that’s not really the type of argument I’d think is relevant at all.)