My intended solution was that, if you evaluate the utility your constituents get from creating more people, you're explicitly not taking the utility of the new people into account. I'll add a few sentences at the end of the article to try to clarify this.
Another thing I can say is that, if you assume that everyone’s utility is zero at the decision point, it’s not clear why you would see a utility gain from adding more people.
Isn’t this equivalent to total utilitarianism that only takes into account the utility of already extant people? Also, isn’t this inconsistent over time (someone who used this as their ethical framework could predict specific discontinuities in their future values)?
I suppose you could say that it’s equivalent to “total utilitarianism that only takes into account the utility of already extant people, and only takes into account their current utility function [at the time the decision is made] and not their future utility function”.
(Under mere “total utilitarianism that only takes into account the utility of already extant people”, the government could wirehead its constituency.)
Yes, this is explicitly inconsistent over time. I actually would argue that the utility function for any group of people will be inconsistent over time (as preferences evolve, new people join, and old people leave) and any decision-making framework needs to be able to handle that inconsistency intelligently. Failure to handle that inconsistency intelligently is what leads to the Repugnant Conclusions.
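To make the decision rule under discussion concrete, here is a minimal sketch of "total utilitarianism over already-extant people, using their utility functions as frozen at the decision point." The names, policies, and numbers are purely hypothetical illustrations, not anything from the article itself; the point is just that people created by a policy never appear in the sum, so adding them yields no gain by construction.

```python
# Score a policy by summing the utilities of the people who already exist at the
# decision point, using their utility functions as they stand at that moment.
# People who would be created by the policy contribute nothing to the score.

def score_policy(existing_people, policy):
    """Sum each currently-extant person's (frozen) utility for the policy."""
    return sum(person["utility_fn"](policy) for person in existing_people)

# Two hypothetical constituents whose current preferences assign zero utility
# to adding more people, but positive utility to some other policy.
alice = {"utility_fn": lambda policy: {"build_park": 1.0, "add_people": 0.0}[policy]}
bob   = {"utility_fn": lambda policy: {"build_park": 0.5, "add_people": 0.0}[policy]}

existing = [alice, bob]

print(score_policy(existing, "build_park"))  # 1.5
print(score_policy(existing, "add_people"))  # 0.0 -- the new people's utility is never counted
```

Because the utility functions are evaluated as they are at decision time, re-running the same scoring later (with a different population or evolved preferences) can give different answers, which is exactly the time-inconsistency noted above.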