Interesting. One issue DNT doesn’t seem to fix is the worst part of the very repugnant conclusion.
Specifically, while in the preferred world the huge population is glad to have been born, you’re still left with a horribly suffering population.
Considering that world to be an improvement likely still runs counter to most people’s intuition. Does it run counter to yours? I prefer DNT to standard total utilitarianism here, but I don’t endorse either in these conclusions.
My take is that repugnant conclusions as usually stated aren’t too important: in practice we’re generally dealing with some fixed budget (of energy, computation or similar), so the practical decisions we face are only between worlds drawing on equivalent resources.
I’m only really worried by worlds that are counter-intuitively preferred after we fix the available resources.
With fixed, limited energy, killing-and-replacing-by-an-equivalent is already going to be a slight negative: you’ve wasted energy to accomplish an otherwise morally neutral act (ETA: I’m wrong here; a kill-and-replace operation could save energy). It’s not clear to me that it needs to be more negative than that (maybe).
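(To spell the fixed-budget point out in symbols, a rough sketch under a plain total-utilitarian sum rather than DNT’s actual utility function, with notation of my own: replacing a person of lifetime welfare $w$ with an equivalent person of the same welfare leaves the population total unchanged, so the only net effect is the energy the operation burns, which could otherwise have produced some small welfare $\epsilon > 0$:

$$U_{\text{replace}} = \Big(\textstyle\sum_i w_i\Big) - w + w - \epsilon = U_{\text{status quo}} - \epsilon.$$

On this accounting the act is a slight negative purely through the wasted energy; nothing in the bare sum penalizes the killing itself.)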
Specifically, while in the preferred world the huge population is glad to have been born, you’re still left with a horribly suffering population.
This conclusion seems absolutely fine to me. The above-h0 population has positive value that is greater than the negative value of the horribly suffering population. If someone’s intuition is against that, I suppose it’s a situation similar to torture vs. dust specks: failure to accept that a very bad thing can be compensated by a lot of small good things. I know that, purely selfishly, I would prefer a small improvement with high probability over something terrible with sufficiently tiny probability. Scaling that to a population, we go from probabilities to quantities.
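(A toy version of that aggregation claim, with purely illustrative numbers: if each of $N$ above-$h_0$ lives contributes a small positive value $\delta$ and each of $S$ horribly suffering lives contributes a large negative value $-b$, the total is positive whenever

$$N\delta - Sb > 0, \quad\text{i.e.}\quad N > \frac{Sb}{\delta}.$$

For instance, with $\delta = 0.001$, $S = 10^6$ and $b = 100$, any $N > 10^{11}$ makes the sum come out positive. It is the same structure as torture vs. dust specks, with probabilities replaced by counts.)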
With fixed, limited energy, killing-and-replacing-by-an-equivalent is already going to be a slight negative: you’ve wasted energy to accomplish an otherwise morally neutral act. It’s not clear to me that it needs to be more negative than that (maybe).
I strongly disagree (it is not morally neutral at all), but I’m not sure how to convince you if you don’t already have this intuition.
Oh sure—agreed on both counts. If you’re fine with the very repugnant conclusion after raising the bar on h a little, then it’s no real problem. Similar to dust specks, as you say.
On killing-and-replacement I meant it’s morally neutral in standard total utilitarianism’s terms.
I had been thinking that this wouldn’t be an issue in practice, since there’d be an energy opportunity cost… but of course this isn’t true in general: there’d be scenarios where a kill-and-replace action saved energy. Something like DNT would be helpful in such cases.
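(To illustrate that case with a stand-in penalty term, not DNT’s actual formula: if a kill-and-replace action saves energy worth some welfare $\epsilon_{\text{saved}} > 0$, a bare total view scores it as

$$U_{\text{replace}} = U_{\text{status quo}} + \epsilon_{\text{saved}} > U_{\text{status quo}},$$

and so endorses it, while any view that attaches a per-death disvalue $d$, as DNT does in spirit, scores it as $U_{\text{status quo}} + \epsilon_{\text{saved}} - d$ and rejects it whenever $d > \epsilon_{\text{saved}}$.)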