I would consider the option of creating a utility monster to be a reductio ad absurdum of utilitarianism.
Why?
Because it doesn’t seem right to me to create something that will kill off all of humanity even if it would have higher utility.
There are (I feel confident enough to say) 7 billion plus of us actually existing people who are NOT OK with you building something to exterminate us, no matter how good it would feel about doing it.
So, you claim you want to maximize utility, even if that means building something that will kill us all. I doubt that’s really what you’d want if you thought it through. Most of the rest of us don’t want that. But let’s imagine you really do want that. Now let’s imagine you try to go ahead anyway. Then some peasants show up at your Mad Science Laboratory with torches and pitchforks demanding you stop. What are you going to say to them?
This isn’t really about utility monsters. The same argument will apply, equally well or equally badly, to any situation where we ask, “What do you think about replacing humanity with something better?”
Probably dinosaurs would have objected, if they could, to being replaced by humans, who are presumably better than them, but that does not change the fact that the resulting situation is better. And likewise, whether or not humans object to being replaced by something better, it would still be better if it happens.
“Better” from whose perspective?
If it’s “This thing is so great that even all of us humans agree that it killing us off is a good thing” then fine. But if it’s “Better according to an abstract concept (utility maximization) that only a minority of humans agree with, but fuck the rest of humanity, we know what’s better” then that’s not so good.
Sure, we’re happy that the dinosaurs were killed off, given that it allowed us to replace them. That doesn’t mean the dinosaurs should have welcomed that.
I meant better from the point of view of objective truth, but if you disagree that better in that way is meaningful, we can change it to this:
Something, let’s call it X, comes into existence and replaces humanity. It is better for X to be X, than for humans to be humans.
That is a meaningful comparison in exactly the same way that it is meaningful to say that being a human being is better (for human beings, of course) than being a dinosaur is (for dinosaurs, of course).
That does not mean that humans would want X to come into existence, just as dinosaurs might not have wanted to be wiped out. But from a pretty neutral point of view (if we assume being human is better for humans than being a dinosaur is for dinosaurs), there has been improvement since the dinosaurs, and there would be more if X came into existence.
Also, there’s another issue. You seem to be assuming that humans have the possibility of not being replaced. That is not a real possibility. Believing that the human race is permanent is exactly the same kind of wishful thinking as believing that you have an immortal soul. You are going to die, and no part of you will outlive that; and likewise the human race will end, and will not outlive that. So the question is not whether humanity is going to be replaced. It is just whether it will be replaced by something better, or something inferior. I would rather be replaced by something better.
Better or inferior from which point of view?
Since I said I would rather be replaced by something better, I meant from my point of view. But one way or another, since we will be replaced by something different, it will be better or worse from pretty much any point of view, except the “nothing matters” point of view.
Regarding your first 4 paragraphs: as it happens, I am human.
Regarding your last paragraph: yes most likely, but we can assess our options from our own point of view. Most likely our own point of view will include, as one part of what we consider, the point of view of what we are choosing to replace us. But it won’t likely be the only consideration.
Sure. I don’t disagree with that.
Haha, yeah, I agree there are some practical problems.
I just think that, in the abstract, ad absurdum arguments are a logical fallacy. And of course most people on Earth (including myself) are intuitively appalled by the idea, but we really shouldn’t be trusting our intuitions on something like this.
If 100% of humanity are intuitively appalled by an idea, but some of them go ahead and do it anyway, that’s just insanity. If the people going ahead with it think that they need to do it because that’s the morally obligatory thing to do, then they’re fanatic adherents of an insane moral system.
It seems to me that you think that utilitarianism is just abstractly The Right Thing to Do, independently of practical problems, any intuitions to the contrary including your own, and all that. So, why do you think that?
If 100% of humanity are intuitively appalled by an idea, but some of them go ahead and do it anyway, that’s just insanity.

Really? I think almost everyone has things that they find intuitively appalling but do anyway. Walking by a scruffy, hungry-looking beggar? Drinking alcohol? There’s something that your intuition and your actions disagree on.
Personally, I’m not a utilitarian because I don’t think ANYTHING is the Right Thing to Do; it’s all preferences and private esthetics. But really, if you are a moral realist, you shouldn’t claim that other humans’ moral intuitions are binding; you should Do The Right Thing regardless of any disagreement or reprisals. (Note: you’re still allowed to not know the Right Thing, but even then you should have some justification other than “feels icky” for whatever you do choose to do.)
But even a moral realist should not have 100% confidence that he/she is correct with respect to what is objectively right to do. The fact that 100% of humanity is morally appalled by an action should, at a minimum, raise a red flag that the action may not be morally correct.
Similarly, “feeling icky” about something can be a moral intuition that is in disagreement with the course of action dictated by one’s reasoned moral position. It seems to me that “feeling icky” about something is a good reason for a moral realist to reexamine the line of reasoning that led him/her to believe that course of action was morally correct in the first place.
It seems to me that it is folly for a moral realist to ignore his/her own moral intuitions or the moral intuitions of others. Moral realism is about believing that there are objective moral truths. But a person with 100% confidence that he/she knows what those truths are and is unwilling to reconsider them is not just a moral realist, he/she is also a fanatic.
OK, I guess I was equivocating on intuition.
But on your second paragraph: I don’t think I actually disagree with you about what actually exists.
Here are some things that I’m sure you’ll agree exist (or at least can exist):
- preferences and esthetics (as you mentioned)
- tacitly agreed-on patterns of behaviour, or overt codes, that reduce conflict
- game-theoretic strategies that encourage others to cooperate, and commitment to them either innately or by choice
Now, the term “morality”, and related terms like “right” or “wrong”, could be used to refer to things that don’t exist, or they could be used to refer to things that do exist, like maybe some or all of the things in that list, or other things that are like them and also exist.
Now, let’s consider someone who thinks, “I’m intuitively appalled by this idea, as is everyone else, but I’m going to do it anyway, because that’s the morally obligatory thing to do even though most people don’t think so” and analyze that in terms of things that actually exist.
Some things that actually exist that would be in favour of this point of view are:
- an aesthetic preference for a conceptually simple system, combined with a willingness to bite really large bullets
- a willingness to sacrifice oneself for the greater good
- a willingness to sacrifice others for the greater good
- a perhaps unconscious tendency to show loyalty to one’s tribe (EA) by sticking to tribal beliefs (Utilitarianism) in the face of reasons to the contrary
Perhaps you could construct a case for that position out of these or other reasons in a way that does not add up to “fanatic adherent of insane moral system” but that’s what it’s looking like to me.
we really shouldn’t be trusting our intuitions on something like this.

I don’t see why not; after all, a person relies on his/her ethical intuitions when selecting a metaethical system like utilitarianism in the first place. Surely someone’s ethical intuition regarding an idea like the one that you propose is at least as relevant as the ethical intuition that would lead a person to choose utilitarianism.
I just think that, in the abstract, ad absurdum arguments are a logical fallacy.

I don’t see why. It appears that you and simon agree that utilitarianism leads to the idea that creating utility monsters is a good idea. But whereas you argue from your intuition that utilitarianism is correct to the conclusion that we should create utility monsters, simon argues from his intuition that creating a utility monster as you describe is a bad idea to the conclusion that utilitarianism is not a good metaethical system. It would appear that simon’s reasoning mirrors your own.

Like the saying goes: one person’s modus ponens is another person’s modus tollens.
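To spell out the schema (reading P as “utilitarianism is correct” and Q as “we should create the utility monster”, per the comparison above): modus ponens runs P → Q, P, therefore Q; modus tollens runs P → Q, not Q, therefore not P. Both start from the same conditional and differ only in which premise they take as given.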