How about the appropriate utility function being to maximize the integral of all humans who have ever lived?
Any utilitarianism is subject to a repugnant conclusion of some form. In this case, mandatory insemination of all fertile females and reducing subsistence to the minimum acceptable caloric level would likely yield a higher “integral of all humans who have ever lived” than famine relief would.
But, in any case, I would expect this to lead to the Malthusian scenario we should be trying to avoid, not an overall maximization of all humans who have ever lived.
What if the reason repugnant conclusions come up is that we only have estimates of our real utility functions which are an adequate fit over most of the parameter space but diverge from true utility under certain boundary conditions?
If so, either don’t feel shame about having to patch your utility function with one that fits better in that region of the parameter space… or aim for a non-maximal but close-to-maximum utility that is far enough from the boundary that it can be relied on.
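The divergence-at-the-boundary idea can be made concrete with a toy model. Everything here is hypothetical illustration, not anything from the thread: a proxy utility that tracks a “true” utility well at moderate parameter values but omits a cost term that dominates at large ones. Maximizing the proxy runs to the edge of the search space; deliberately restricting the search to a trusted region away from the boundary lands near the true peak.

```python
import math

# Hypothetical toy: the proxy fits the true utility well for moderate x
# but misses a cost term that grows large near the boundary of the space.

def true_utility(x):
    return math.log1p(x) - 0.001 * x ** 3   # hidden cost dominates at large x

def proxy_utility(x):
    return math.log1p(x)                    # our estimate: misses the cost

grid = [i / 10 for i in range(0, 201)]      # search space: 0.0 .. 20.0
proxy_opt = max(grid, key=proxy_utility)    # proxy is monotone: runs to 20.0

# "Far enough from the boundary": cap the search at an arbitrary trust region.
safe_grid = [x for x in grid if x <= 10]
patched_opt = max(safe_grid, key=proxy_utility)   # non-maximal, but reliable
```

Here `proxy_opt` lands at the boundary (20.0), where the true utility is actually disastrous, while `patched_opt` (10.0) is non-maximal under the proxy but scores far better under the true utility. The trust-region cutoff of 10 is an arbitrary choice for illustration.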
adequate over most of the parameter space but diverge from true utility under some boundary conditions
Right, this is the issue Eliezer discussed at length in various posts; The Hidden Complexity of Wishes is one of the most illuminating. A utility-maximizing genie will always find a cheap way to maximize a given utility function by exploiting one of the “boundaries” the wisher never thought of. And given that there are no obvious boundaries to begin with, and thus none to stay “far enough from”, it would simply pick a convenient point in the parameter space.
How about this as a rule of thumb, pending something more formal:
If a particular reallocation of resources/priority/etc. seems optimal, look for a point in the solution space between there and the status quo that is more optimal than the status quo, go for that point, and re-evaluate from there.
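The rule of thumb above can be sketched as an iterative procedure. This is a minimal toy, with a made-up one-dimensional utility estimate standing in for whatever we actually value; the step fraction and the number of re-evaluations are arbitrary illustrative choices.

```python
# Toy sketch of "step partway toward the optimum, then re-evaluate."
# u is our *estimated* utility, trusted less the farther we move from
# the status quo, so we never jump straight to the seeming optimum.

def step_toward(u, status_quo, seeming_optimum, fraction=0.25):
    """Move a fraction of the way from the status quo toward the
    seemingly-optimal point, but only if that intermediate point
    actually improves on the status quo under the current estimate."""
    candidate = status_quo + fraction * (seeming_optimum - status_quo)
    return candidate if u(candidate) > u(status_quo) else status_quo

u = lambda x: -(x - 10) ** 2   # hypothetical estimate, peak at x = 10
x = 0.0                        # status quo
for _ in range(5):             # re-evaluate after each partial step
    x = step_toward(u, x, 10.0)
# x has moved most of the way toward 10 without ever jumping there
```

Each iteration would, in a real application, include re-estimating the utility function itself from the new vantage point, which is the part a one-line toy cannot capture.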
So, looking at shminux’s post above, you would suggest mandatory insemination of only some fertile females and reducing subsistence to slightly above the minimum acceptable caloric levels…?
If your seemingly-optimal point is repugnant why would you want to go in that direction anyway?
So, looking at shminux’s post above, you would suggest mandatory insemination of only some fertile females and reducing subsistence to slightly above the minimum acceptable caloric levels…?
I believe that deliberately increasing population growth is precisely the opposite of the direction we should be aiming toward if we are to maximize any utility function that penalizes die-offs, at least as long as we are strictly confined to one planet. I was just more interested in the more general point shminux raised about repugnant conclusions, and wanted to address that rather than the specifics of this particular repugnant conclusion.
I think the way to maximize the “human integral” is to find the rate of population change that maximizes our chances of surviving long enough, and ramping up our technological capabilities fast enough, to colonize the solar system. That rate, in turn, will be bounded from above by growth rates that risk overshoot, civilizational collapse, and die-off, and bounded from below by the critical mass necessary for optimal technological progress and by the minimum viable population. My guess is that the first of these bounds is the more proximal one.
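The bounded-above, bounded-below structure can be illustrated with a toy simulation. Every number here is invented for illustration: a one-planet carrying capacity, and an annual collapse hazard that grows with the square of the growth rate as a stand-in for overshoot risk. The point is only that the expected “human integral” peaks at an interior growth rate, not at either extreme.

```python
def expected_integral(g, years=300, capacity=100.0, hazard_coeff=0.5):
    """Expected person-years under annual growth rate g, with a
    carrying-capacity ceiling (the one-planet bound) and an annual
    collapse hazard that rises with g**2 (overshoot risk).  All
    parameters are made-up illustrations."""
    pop, survival, total = 1.0, 1.0, 0.0
    hazard = hazard_coeff * g * g
    for _ in range(years):
        total += survival * pop          # expected person-years this year
        survival *= 1.0 - hazard         # probability of no collapse yet
        pop = min(pop * (1.0 + g), capacity)
    return total

rates = [i / 100 for i in range(1, 21)]  # 1% .. 20% growth per year
best = max(rates, key=expected_integral)
# best falls strictly inside the range: too slow never builds the
# integral, too fast trades it away against collapse risk
```

In this toy the die-off hazard (the upper bound) is what pins down the optimum, matching the guess above that it is the more proximal constraint; a lower-bound term (minimum viable population, critical mass for progress) would be a straightforward extension.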
At any rate, we have to have some better-than-nothing way of handling repugnant conclusions that doesn’t amount to doing nothing and waiting for someone else to come up with all the answers. I also think it’s important to distinguish between optima that are inherently repugnant and optima that are themselves non-repugnant but for which we haven’t yet been able to think of a non-repugnant path from here to there.