In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and to create/give birth to another being of comparable happiness. In fact, if one can kill a billion people to create a billion and one, one is morally compelled to do so.
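To make the arithmetic behind this explicit, here is a minimal sketch; the happiness value and population sizes are placeholders for illustration, not anything from the discussion:

```python
# Total utilitarianism ranks worlds by summed welfare.
# Illustrative numbers only: assume every life has the same happiness h.
h = 10.0                       # happiness per life (arbitrary units)
before = 1_000_000_000 * h     # a billion existing people
after = 1_000_000_001 * h      # the replacement population

# The total view ranks the replacement world strictly higher,
# so on that view one is compelled to make the swap.
print(after > before)   # True: total welfare rises by exactly h
```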
I dare to say that no self-professed “total utilitarian” actually aliefs this.
I know total utilitarians who’d have no problem with that. Imagine simulated minds instead of carbon-based ones: if you picture shutting one simulation off and turning another one on, that removes some of our intuitive aversion to killing, and it may make the conclusion less counterintuitive. Personally I’m not a total utilitarian, but I don’t think that’s a particularly problematic aspect of it.
My problem with total hedonistic utilitarianism is the following: imagine a planet full of beings living in terrible suffering. You have the choice to either euthanize them all (or just make them happy), or let them go on living forever while also creating a sufficiently huge number of beings with lives barely worth living somewhere else. Now that I find unacceptable. I don’t think you do anything good by bringing a happy being into existence.
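A rough sketch of why the total view endorses the second option; all numbers here are made up for illustration:

```python
# Option A: euthanize the suffering beings -> their welfare stream ends at ~0.
# Option B: let them suffer forever, but add n lives barely worth living.
# All numbers are placeholders on an arbitrary welfare scale.
suffering_total = -1e12   # aggregate welfare of the suffering planet
epsilon = 1e-3            # welfare of one life "barely worth living"
n = 2e15                  # number of extra lives created

option_a = 0.0
option_b = suffering_total + n * epsilon   # -1e12 + 2e12 = 1e12

# For large enough n, the total view prefers letting the suffering continue.
print(option_b > option_a)   # True
```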
As someone who plans on uploading eventually, if the technology comes around… no. Still feels like murder.
This is problematic. If bringing a happy being into existence doesn’t do anything good, and bringing a neutral being into existence doesn’t do anything bad, what do you do when you switch a planned neutral being for a planned happy being? For instance, you set aside some money to fund your unborn child’s education at the College of Actually Useful Skills; on your view, that improvement would seem to count for nothing.
Good catch; I’m well aware of that. I didn’t say that I think bringing a neutral being into existence is neutral: if the neutral being’s life contains suffering, the suffering counts negatively. Prior-existence views seem not to work without the inconsistency you pointed out. The only consistent alternative to total utilitarianism is, as I currently see it, negative utilitarianism, which has its own repugnant conclusions (e.g. anti-natalism), but for several reasons I find those easier to accept.
As I said, any preferences that can be cast into utility function form are consistent. You seem to be adding extra requirements for this “consistency”.
I should qualify my statement. I was talking only about the common varieties of utilitarianism, and I may well have omitted consistent variants that are unpopular or weird (e.g. something like negative average preference-utilitarianism). Basically my point was that “hybrid” views like prior-existence (or “critical level” negative utilitarianism) run into contradictions. Most forms of average utilitarianism aren’t contradictory, but they imply an obvious absurdity: a world with one being in maximum suffering would be [edit:] worse than a world with a billion beings in suffering that’s just slightly less awful.
That last sentence didn’t make sense to me when I first looked at this. Think you must mean “worse”, not “better”.
Indeed, thanks.
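For concreteness, the corrected comparison is easy to check numerically; the utility values are placeholders on an arbitrary scale:

```python
# Average utilitarianism ranks worlds by mean welfare.
# Placeholder utilities on an arbitrary scale.
one_world = -100.0        # one being in maximum suffering (the mean is -100.0)
billion_world = -99.9     # mean welfare of a billion slightly-less-awful lives

# The billion-sufferers world has the higher average,
# so the average view calls the single-sufferer world worse.
print(billion_world > one_world)  # True
```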
I’m still vague on what you mean by “contradictions”.
Not in the formal sense. I meant, for instance, what Will_Savin pointed out above: a neutral life (a lot of suffering and a lot of happiness) being equally worth creating as a happy one (mainly just happiness, very little suffering). Or, for “critical levels” (which also relate to the infamous dust specks), see section VI of this paper, where you get different results depending on how you start aggregating. And Peter Singer’s prior-existence view seems to contain a “contradiction” (maybe “absurdity” is better) as well, having to do with replaceability, but that would take me a while to explain. It’s not quite a contradiction in the sense that the theory states “do X and not-X”, but it’s obvious enough that something doesn’t add up. I hope that clarified things; sorry for my terminology.
Ah, I see. Anti-natalism is certainly consistent, though I find it even more repugnant.
Assuming perfection in the methods, ending N lives and replacing them with N+1 equally happy lives doesn’t bother me. Death isn’t positive or negative except inasmuch as it removes the chance of future joy/suffering for the one killed and saddens those left behind.
With physical humans you won’t have perfect methods, and any attempt to apply this will end in tragedy. But with AIs (emulated brains or fully artificial) it might well apply.