There is a sleight of hand in the quote: it replaces “suboptimal” with “immoral”, where being “optimal” means maximizing a specific utility function.
For example, if your (fake) utility function is something like “total number of people on earth by the time you die”, then yes, you should, among other things, forgo personal procreation. Fortunately, very few people have such a simplistic approach to life. If they did, they would find even better ways to maximize this number, like subverting birth control efforts or something.
Or maybe their utility function is the “total number of people who live to be at least 80 years old”, and that’s why they want to save people from famine. Or something else.
My point is that the quote is an example of fake consequentialism, where some unspoken deontological or virtue ethics is couched in terms of utilitarianism, but without ever explicitly stating the utility function, because any particular action advocated by its proponents would then be revealed as suboptimal.
I think their professed utility function would be maximizing something more like Quality-Adjusted Life Years (QALYs), under which efficient charity efforts would most likely be more effective than subverting birth control efforts, and it’s certainly within the realm of plausibility that they would be more effective than having children.
However, the usual formulations of QALYs definitely do not adequately capture my own sense of utility, at least. I can’t speak for those behind the calculations.
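For reference, the usual formulation is roughly

$$\text{QALYs} = \sum_i q_i \, t_i, \qquad 0 \le q_i \le 1,$$

where $t_i$ is the time spent in health state $i$ and $q_i$ is that state’s quality weight (conventionally 1 for full health, 0 for death).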
Right. Maximizing total QALYs sounds good on paper, but it can probably be gamed as easily as any other utilitarianism. Say, by wireheading or drugging. Complexity of value and such.
How about the appropriate utility function being to maximize the integral of all humans who have ever lived?
Because what does it matter if you maximize the number of people alive in one time interval, or maximize how happy they are, if this is followed by extinction?
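Spelled out, that would be something like

$$U = \int_0^T N(t)\, dt,$$

where $N(t)$ is the number of people alive at time $t$ and $T$ is when humanity ends: total person-years ever lived. An early extinction truncates the integral no matter how high $N(t)$ peaked.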
> How about the appropriate utility function being to maximize the integral of all humans who have ever lived?
Any utilitarianism is subject to a repugnant conclusion of some form. In this case, mandatory insemination of all fertile females and reducing subsistence to the minimum acceptable caloric levels would likely yield a higher “integral of all humans who have ever lived” than famine relief.
But, in any case, I would expect this to lead to the Malthusian scenario we should be trying to avoid, not an overall maximization of all humans who have ever lived.
What if the reason repugnant conclusions come up is that we only have estimates of our real utility functions, estimates that are an adequate fit over most of the parameter space but diverge from true utility under certain boundary conditions?
If so, either don’t feel shame about having to patch your utility function with one that fits better in that region of the parameter space… or aim for a non-maximal but near-maximum utility that is far enough from the boundary that it can be relied on.
> an adequate fit over most of the parameter space but diverge from true utility under certain boundary conditions
Right, this is the issue Eliezer discussed at length in various posts; “The Hidden Complexity of Wishes” is one of the most illuminating. A utility-maximizing genie will always find a cheap way to maximize a given utility by picking one of the “boundaries” the wisher never thought of. And given that there are no obvious boundaries to begin with to stay “far enough from”, it would simply pick a convenient point in the parameter space.
How about this as a rule of thumb, pending something more formal:
If a particular reallocation of resources/priority/etc. seems optimal, look for a point in the solution space between there and the status quo that improves on the status quo, go for that point, and re-evaluate from there.
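A minimal sketch of that rule in code, assuming allocations can be represented as vectors of numbers and scored by some estimated utility function (everything here is a hypothetical illustration, not a procedure anyone in the thread has actually proposed):

```python
def step_toward(current, seeming_optimum, utility, steps=10):
    """Move from the status quo toward the seemingly optimal point,
    but only as far as estimated utility keeps improving."""
    best = current
    for i in range(1, steps + 1):
        frac = i / steps
        # Interpolate between the status quo and the seeming optimum.
        candidate = [c + frac * (s - c) for c, s in zip(current, seeming_optimum)]
        if utility(candidate) > utility(best):
            best = candidate
        else:
            # Utility stopped improving: stay well short of the "optimum"
            # and re-evaluate from here before taking another step.
            break
    return best

# Toy example: the true peak lies between the status quo and the proposal,
# so the search stops at an intermediate point rather than at the proposal.
status_quo, proposal = [100.0, 0.0], [0.0, 100.0]
u = lambda x: -(x[0] - 35.0) ** 2 - (x[1] - 55.0) ** 2
print(step_toward(status_quo, proposal, u))  # stops at ~[40.0, 60.0]
```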
So, looking at shminux’s post above, you would suggest mandatory insemination of only some fertile females and reducing subsistence to slightly above the minimum acceptable caloric levels…?
If your seemingly-optimal point is repugnant, why would you want to go in that direction anyway?
> So, looking at shminux’s post above, you would suggest mandatory insemination of only some fertile females and reducing subsistence to slightly above the minimum acceptable caloric levels…?
I believe that deliberately increasing population growth is exactly the opposite of the direction we should be aiming in if we are to maximize any utility function that penalizes die-offs, at least as long as we are strictly confined to one planet. I was just more interested in the general point shminux raised about repugnant conclusions, and wanted to address that rather than the specifics of this particular repugnant conclusion.
I think the way to maximize the “human integral” is to find the rate of population change that maximizes our chances of surviving long enough, and of ramping up our technological capabilities fast enough, to colonize the solar system. That rate, in turn, will be bounded from above by population growth rates that risk overshoot, civilizational collapse, and die-off, and bounded from below by the critical mass necessary for optimum technological progress and the minimum viable population. My guess is that the upper bound is the more proximal one.
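Schematically, the proposal is something like

$$r^* = \arg\max_{\,r_{\min} \le r \le r_{\max}} P(\text{survive to colonization} \mid r),$$

where $r$ is the population growth rate, the upper bound $r_{\max}$ is set by overshoot and collapse risk, and the lower bound $r_{\min}$ by the critical mass for technological progress and the minimum viable population.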
At any rate, we have to have some better-than-nothing way of handling repugnant conclusions that doesn’t amount to doing nothing and waiting for someone else to come up with all the answers. I also think it’s important to distinguish between optima that are inherently repugnant and optima that are themselves non-repugnant but for which we haven’t yet thought of a non-repugnant path from here to there.
What’s wrong with extinction, except probably that the last generation of people (like many others) have sucky lives?

Well, that for starters.

Then there is the drive to ensure the survival and happiness of your children. I have found that this increases with age. If you don’t have that drive yet, simply wait. There’s a good chance you will be surprised to find that you develop one, as I have. I imagine this drive undergoes another burst when one’s children have children.
Then there is foreclosing the possibility of the human race reaching the stars. If that doesn’t excite you, what does? Sports? Video games? I’m sure those will also spread through the galaxy if we do.
Then there is the possibility that medical science has a breakthrough or two during your lifetime that makes these more than just theoretical futures.
> Then there is the drive to ensure the survival and happiness of your children
I assume you mean people’s drive to have children, since nobody talked about extinction through killing anyone off. But given what you said (that maximising living people’s happiness doesn’t matter if it is followed by extinction), one would expect a more principled objection to the latter event.
> There’s a good chance you will be surprised to find that you develop one.
No, there isn’t, but that is beside the point here. I’m not committing the typical mind fallacy: I don’t deny that many people have that drive.
> I’m sure those will also spread through the galaxy if we do.
I don’t care about spreading anything through the galaxy, actually. I wonder how much the average person does. (I immensely admire certain works of art, for example, and yet, the thought that nobody should be there to enjoy them, or create any more like them, does not bug me in the slightest.)