My brain spontaneously generated an argument for why killing all humans might be the best way to satisfy my values. As far as I know it’s original; at any rate, I don’t recall seeing it before. I don’t think it actually works, and I’m not going to post it on the public internet. I’m happy to just never speak of it again, but is there something else I should do?
Find out how your brain went wrong, with a view to not going so wrong again.
Playing devil’s advocate here, the original poster is not that wrong. Ask any other living species on Earth and they will say their life would be better without humans around.
Apart from the fact that they wouldn’t say anything (because generally animals can’t speak our languages ;)), nature can be pretty bloody brutal. There are plenty of situations in which our species’ existence has made the lives of other animals much better than they would otherwise be. I’m thinking of veterinary clinics that often perform work on wild animals, pets that don’t have to worry about predation, that kind of thing. Also, I think there are probably a lot of species that have done all right for themselves since humans showed up; crows and their equivalents in that niche around the world seem to do quite well in urban environments.
As someone who cares about animal suffering, is sympathetic to vegetarianism and veganism, and is even somewhat sympathetic to more radical ideas like eradicating the world’s predators, I think that humanity represents a very real possibility of decreasing suffering, including animal suffering, in the world, especially as we grow in our ability to shape the world in the way we choose. Certainly, I think that humanity’s existence provides real hope in this direction, remembering that the alternative is for animals to continue to suffer on nature’s whims, perhaps indefinitely, rather than on ours, perhaps temporarily.
Never thought of it this way. Guess in the long term it makes sense. So far, though...
Let’s ask a cockroach, a tapeworm, and a decorative-breed dog :-)
Humans are leading to the extinction of many species. Given the sorts of things that happen to them in the wild, this may be an improvement.
This is too distant from the original argument to be an argument for it. I’m just playing devil’s advocate recursively.
It seems I was unclear. I have no intention of attempting to kill all humans. I’m not posting the argument publicly because I don’t want to run the (admittedly small) risk that someone else will read it and take it seriously. I’m just wondering if there’s anything I can do with this argument that will make the world a slightly better place, instead of just not sharing it (which is mildly negative to me and neutral to everyone else—unless I’ve sparked anyone’s curiosity, for which I apologise).
What values could possibly lead to such a choice?
Hardcore negative utilitarianism?

In The Open Society and Its Enemies (1945), Karl Popper argued that the principle “maximize pleasure” should be replaced by “minimize pain”. He thought “it is not only impossible but very dangerous to attempt to maximize the pleasure or the happiness of the people, since such an attempt must lead to totalitarianism.”[67] [...]

The actual term negative utilitarianism was introduced by R. N. Smart as the title to his 1958 reply to Popper,[69] in which he argued that the principle would entail seeking the quickest and least painful method of killing the entirety of humanity.

Suppose that a ruler controls a weapon capable of instantly and painlessly destroying the human race. Now it is empirically certain that there would be some suffering before all those alive on any proposed destruction day were to die in the natural course of events. Consequently the use of the weapon is bound to diminish suffering, and would be the ruler’s duty on NU grounds.[70]

(Pretty cute wind-up on Smart’s part; grab Popper’s argument that to avoid totalitarianism we should minimize pain, not maximize happiness, then turn it around on Popper by counterarguing that his argument obliges the obliteration of humanity whenever feasible!)
Values that value animals as highly as, or nearly as highly as, humans.
Not if you account for the typical suffering in nature. Humans remain the animals’ best hope of ever escaping that.
It might not just be about suffering—there’s also the plausible claim that humans lead to less variety in other species.
I feel like that’s a value that only works because of scope insensitivity. If the extinction of a species is as bad as killing x individuals, then when the size of the population is not near x, one of those things will dominate. But people still think about it as if they’re both significant.
Why does that, um, matter?
I can see valuing animal experience, but that’s all about individual animals. Species don’t have moral value, and nature as a whole certainly doesn’t.
Would you say the same about groups of humans? Is genocide worse than killing an equal number of humans but not exterminating any one group?
I suspect that the reason we have stronger prohibitions against genocide than against random mass murder of equivalent size is not that genocide is worse, but that it is more common.
It’s easier to form, motivate, and communicate the idea “Kill all the Foos!” (where there are, say, a million identifiable Foos in the country) than it is to form and communicate “Kill a million arbitrary people.”
I suspect that’s not actually true. The communist governments killed a lot of people in a (mostly) non-genocidal manner.
The reason we have stronger prohibitions against genocide is the same reason we have stronger prohibitions against the swastika than against the hammer and sickle. Namely, the Nazis were defeated and no longer able to defend their actions in debates, while the communists had a lot of time to produce propaganda.
Wait, what? Did considering genocide more heinous than regular mass murder only start with the end of WWII?
For what it’s worth, the word genocide may have been invented to describe what the Nazis did—anyone have OED access to check for earlier cites?
It existed before, but its use really picked up after WWII.
Unfortunately, genocides happen all the time.
But only one of them got big media attention. Which made it the evil one.
Cynically speaking: if you want the world to not pay attention to a genocide, (a) don’t do it in a first-world country, and (b) don’t do it during a war with a side that can make condemning the genocide part of its propaganda, especially if you end up losing the war.
Alternatively, killing a million people at semi-random (through poverty or war) is less conspicuous than going after a defined group.
I don’t see why it should be.
Do particular cultures or, say, languages, have any value to you?
Nailed it. By which I mean, this is the standard argument. I’m surprised nobody brought it up earlier.
Do particular computer systems or, say, programming languages, have any value to you?
Compare your attitude to these two questions: what accounts for the difference?
The fact that I am human.
And..?
And what? You’re a human, not a meme, so why are you assigning rights to memes? And why some memes and not others?
I am not assigning any rights to memes. I am saying that, as a human, I value some memes. I also value the diversity of the meme ecosystem and the potential for me to go and get acquainted with new memes which will be fresh and potentially interesting to me.
Why some memes and not others—well, that flows out of my value system and personal idiosyncrasies. Some things I find interesting and some I don’t—but how is that relevant?
So why should anyone else care about your personal set of favored memes?
A fair number of people believe that it’s a moral issue if people wipe out a species, though I’m not sure if I can formalize an argument for that point of view. Anyone have some thoughts on the subject?
… one way or another.
Given how short their lives are, I’d be satisfied with just preventing any further generations.
Let’s suppose for a moment that’s what Username meant. If Username deems other beings to be more valuable than humans, then Username, as a human, will have a hard time convincing hirself to pursue hir own values. So I guess we’re safe.
I’m not going to say what the values are, beyond that I don’t think they would be surprising for a LWer to hold. Also, yes, you’re safe.
But it seems like you started with disbelief in X and were then given an example of X; your reaction should be to now assume that there are more examples of X. Instead, it looks like you’re attempting to reason about the class X based on features of a particular instance of it.
I thought it was clear that “Username deems other beings to be more valuable than humans” was a particular instance of X, not a description of the entire class.
I’d say not to worry about it unless it’s a repetitive thought.
You should consider that the problem may not be in the argument, but in your beliefs about the values you think you have.
I have considered that, and I don’t think it’s a relevant issue in this particular case.
Why are you asking this question?
If you have larger worries about your mental health or are worried that you might do something Very Bad, you should consider seeking mental health assistance. I don’t know the best course there (actually, that would be a great page for someone to write up), but I’m sure there are several people here who could point you in a good direction.
If your name is Leó Szilárd and you wish to register an Omega-class Dangerous Idea™ with the Secret Society of Sinister Scheme Suppressors, I do not believe they exist. Anyone claiming to be a society representative is actually a 4chan troll who will post the idea on a 30-meter billboard in downtown Hong Kong just to mock you. An argument simple enough to be generated spontaneously in your brain is probably loose in the wild already and not very dangerous. To play it safe, stay quiet and think.
If you’re asking because you’ve just thought of this neat thing and you want to share it with someone, but are worried you might look a bit bad, I’m sure plenty of people here would be happy to read your argument in a private message.
Do you care about it? It sounds like you’re responding appropriately (though IMO it’s better that such arguments be public and be refuted publicly, as otherwise they present a danger to people who are smart or lucky enough to think up the argument but not the refutation). If the generation of that argument, or what it implies about your brain, is causing trouble with your life then it’s worth investigating, but if it’s not bothering you then such investigation might not be worth the cost.
This is the sort of thing I’m thinking about. The argument seems more robust than the obvious-to-me counterargument, so I feel that it’s better to just not set people thinking about it. I’m not sure though.
If the argument is simple enough for your brain to generate it spontaneously, someone else has probably thought of it before and not released a mind plague upon humanity. There could even be an established literature on the subject in philosophy journals. Have you done a search?
The argument may not have good keywords and be ungooglable. If that’s the case, you could (a) discuss it with a friendly neighborhood professional philosopher or (b) pay a philosophy grad student $25 to bounce your idea off them.
I quickly brainstormed 6 (rather bad) reasons killing everyone in the world would satisfy someone’s values. How do these reasons compare in persuasiveness? If your reason isn’t much better than these, I don’t think you have much to worry about.
Since you won’t be able to kill all humans and will eventually get caught and imprisoned, the best move is to abandon your plan, according to utilitarian logic.
I’m not so sure this is obvious. How much damage can one intelligent, rational, and extremely devoted person do? Certainly there are a few people in positions that obviously allow them to wipe out large swaths of humanity. Of course, getting to those positions isn’t easy (yet still feasible given an early enough start!). But I’ve thought about this for maybe two minutes; how many nonobvious ways would there be for someone willing to put in decades?
The usual way to rule them out without actually putting in the decades is by taking the outside view and pointing at all the failures. But nobody even seems to have seriously tried. If they had, we’d have at least seen partial successes.
Reform yourself. Killing all humans is axiomatically evil in my playbook, so either (a) you are reasoning from principles which permit Mark!evil (which makes you Mark!evil, and on my watch-list), or (b) you made a mistake. It’s probably the latter.