I have several objections to this which I imagine will be standard among anyone who’s read the Sequences, but since they haven’t been stated yet I might as well state them for the record.
Identifying any function of happiness with utility seems clearly wrong to me. Humans clearly value lots of things other than happiness. Whatever utility is, it shouldn't be so easy to calculate.
Given that, the version of utilitarianism you’ve described is called total utilitarianism. This also seems clearly wrong to me; I think it doesn’t make sense even as an approximation. I don’t think there’s any reason to think that “true utility” is given by a sum over humans any more than it’s given by a sum over human cells. That is, to a first approximation, I think that “true utility” includes lots of complicated interaction terms among humans that aren’t captured by any sum over individual humans alone, in the same way that I think that the “true utility” of an individual human includes lots of complicated interaction terms among their cells (like the interactions among their brain cells making up their mind) that aren’t captured by any sum over individual cells alone.
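To make the contrast concrete, here is a schematic (my own notation, with the $u_k$ as hypothetical interaction terms, not anything defined in the post). Total utilitarianism says

$$U_{\text{total}} = \sum_i u(h_i),$$

a plain sum over individual humans $h_i$, whereas the picture I have in mind allows terms that no per-person sum can capture:

$$U_{\text{true}} = \sum_i u_1(h_i) + \sum_{i<j} u_2(h_i, h_j) + \sum_{i<j<k} u_3(h_i, h_j, h_k) + \cdots$$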
The point of this is the relation between perceived happiness and some conjured "raw" happiness. The implications for various ethical systems are just that, implications, and including them was not meant as an endorsement. I don't want to argue for utilitarianism here, but I hope we agree that some forms of utilitarianism are obviously relevant and used in practice.
I'm a bit confused by "identifying any function of happiness with utility seems clearly wrong to me": do you propose that the actual utility function, as you understand it, has no relation to happiness at all?
I believe what Qiaochu is saying is not that happiness isn't a component of your utility function, but rather that it doesn't comprise the entirety of your utility function. (Normatively speaking, of course. In practical terms, humans don't even behave as though they have a consistent utility function.)
Thanks. I guess I should not have included the simple utilitarian calculation, as it seems to have acted as a red herring :( Mea culpa.
Qiaochu: Would the article make better sense if framed like this: assuming, as per standard LessWrong reasoning, that the actual utility function is very complicated, but also assuming it has a large happiness component (whatever happiness means), we may ask: what is the relation of such a component to the usual approach of measuring happiness by asking people? And how should we aggregate across people?
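In symbols, and purely as a hypothetical sketch (the decomposition and the names $H_i$, $R_i$, $\alpha$ are mine, not established notation): suppose each person's utility is

$$U_i = \alpha H_i + R_i,$$

where $H_i$ is their happiness, $R_i$ is the complicated remainder, and $\alpha$ is assumed to be large. The two questions are then (1) how the self-reported value $\hat{H}_i$ from a survey relates to $H_i$, and (2) which aggregate $A(H_1, \dots, H_n)$ (total, average, or something else) we should care about.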
I don’t even buy that there is a large happiness component. I would not be surprised to find that in a hundred years we look back on the modern western preoccupation with happiness as mostly a strange cultural phenomenon. The analogous thing looking back on the past might be 11th century monks thinking of something like serving Christ as a large component of “true utility,” or whatever.
(But yes, I would be happier with this framing.)