I think there is a confusion about what is meant by utilitarianism and utility.
Consider the moral principle that moral value is local to the individual, in the sense that there is some function F: Individual minds → Real numbers such that the total utility of the universe is the sum of F over all the individuals in it. Alice having an enjoyable life is good, and the amount of goodness doesn’t depend on Bob. This is a real restriction on the space of utility functions. It says that you should be indifferent between (a coin toss between both Alice and Bob existing and neither Alice nor Bob existing) and (a coin toss between Alice existing and Bob existing), at least on the condition that Alice’s and Bob’s quality of life is the same if they do exist in either scenario, and no one else is affected by Alice’s or Bob’s existence.
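As a minimal worked check of that indifference claim (using hypothetical values a = F(Alice) and b = F(Bob), and taking a non-existent person to contribute 0 to the sum), the two coin tosses have the same expected total utility:

```latex
\begin{align*}
\mathbb{E}[U_{\text{lottery 1}}] &= \tfrac{1}{2}\,(a + b) + \tfrac{1}{2}\cdot 0 = \tfrac{a+b}{2} && \text{(coin toss: both exist / neither exists)} \\
\mathbb{E}[U_{\text{lottery 2}}] &= \tfrac{1}{2}\,a + \tfrac{1}{2}\,b = \tfrac{a+b}{2} && \text{(coin toss: only Alice / only Bob exists)}
\end{align*}
```

The two come out equal precisely because value adds up across individuals; a view on which value does not add across individuals need not be indifferent between these lotteries.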
Under this principle, a utopia of a million times the size is a million times as good.
I agree that wanting to create new lives regardless of living conditions is wrong. There is a general assumption that the lives will be worth living. In friendly superintelligence singleton scenarios, this becomes massively overdetermined.
Utility is not some ethereal substance that exists inside humans and animals, such that animals might conceivably contain more of it than humans do.
It is possible that there is some animal or artificial mind such that, if we truly understood its neurology, we would prefer to fill the universe with copies of it.
Often “humans” is short for “beings similar to current humans except for this wish list of improvements” (i.e. posthumans). There is some disagreement over how radical these upgrades should be.