It could just be that a world with additional happy people is better according to my utility function, just as a world with fewer painlessly killed people per unit of time is better according to my utility function. While I agree that goodness should be “goodness for someone” in the sense that my utility function should be something like a function only of the mental states of all moral patients (at all times, etc.), I disagree with the claim that the same people have to exist in two possible worlds for me to be able to say which is better, which is what you seem to be implying in your comment. One world can be better (according to my utility function) than another because some aggregation of the well-being of all moral patients within it is larger. I think most people have such utility functions. Without allowing for something like this, I can’t really see a way to construct an ethical model that says anything interesting about any decisions at all (at least for people who care about other people), as all decisions probably involve choosing between futures with very different sets of moral patients.
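For concreteness, one way such a utility function could be formalized (a sketch only, with a simple sum standing in for whatever aggregation one prefers; the symbols $U$, $P$, and $u_i$ are illustrative):

$$U(w) \;=\; \sum_{i \in P(w)} u_i(w),$$

where $P(w)$ is the set of moral patients who ever exist in world $w$ and $u_i(w)$ is patient $i$’s lifetime well-being. Nothing here requires $P(w_1) = P(w_2)$ in order to compare $U(w_1)$ with $U(w_2)$, which is why the two worlds containing different people poses no obstacle to saying which is better.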
… I disagree with the claim that the same people have to exist in two possible worlds for me to be able to say which is better, which is what you seem to be implying in your comment.
Not quite—but I would say that it is not possible to describe one world as “better” than another in any quantifiable or reducible way (as distinct from “better, according to my irreducible and arbitrary judgment”—to which you are, of course, entitled), unless the two worlds contain the same people (which, please note, is only a necessary, not a sufficient, criterion).
I do not believe that aggregation of well-being across individuals is possible or coherent.
(Incidentally, I am also fairly sure that most people don’t have utility functions, period, but I imagine that your use of the term was metaphorical, and in practice should be read merely as “preferences” or something similar.)
Without allowing for something like this, I can’t really see a way to construct an ethical model that says anything interesting about any decisions at all (at least for people who care about other people), as all decisions probably involve choosing between futures with very different sets of moral patients.
Come now, this is not a sensible model of how we make decisions. If I must choose between (a) stealing my mother’s jewelry in order to buy drugs and (b) giving a homeless person a sandwich, there are all sorts of ethical considerations we may bring to bear on this question, but “choosing between futures with very different sets of moral patients” is simply irrelevant to the question. If your decision procedure in a case like this involves the consideration of far-future outcomes, requires the construction of utility aggregation procedures across large numbers of people, etc., etc., then your ethical framework is of no value and is almost certainly nonsense.