A way to make this argument would be to claim that human values about how to interpret human values are themselves complex. As an illustration, one could point out that the naive utilitarian position on torture-vs.-specks totally disregards the preferences of the 3^^^3 people about which answer the answerer should give to the dilemma—we’ll assume those 3^^^3 people mostly do not hold naive utilitarian ethics. Then the answerer’s problem is much tougher, because to disregard those people’s preferences he has to be confident that he understands morality better than they do, which, for a naive preference utilitarian, is a self-defeating position.
(And my knowledge of implicit utilitarian meta-ethics gets iffy here, but the naive utilitarian also has no sense in which he could say that choosing specks was wrong, because wrongness is determined only by preferences. He could only say he himself didn’t prefer to do what his ethics told him to do—but his ethics are his preferences, so his claim not to prefer specks would be mostly wrong, or else self-contradicting.)
I wrote a post about this, and also about non-obvious and important considerations for the trolley problem. Hopefully sound arguments in this vein will cause people to recognize moral uncertainty, and especially meta-ethical uncertainty, as a serious problem. The neglect of the subject increases the chance that an FAI team will see a meta-ethical consensus around them when there isn’t one—consider that Eliezer has claimed (perhaps as deliberate exaggeration?) that meta-ethics is a solved problem, even though folk like Wei Dai disagree.
Actually, re implicit utilitarian meta-ethics, I have some confusions. Assume preference utilitarianism. We’ll say most people think utilitarianism is wrong. They’d prefer you used virtue ethics. They think morality is hella important, more so than their other preferences—that’s plausible. In such a world, would a preference utilitarian thus be obliged to forget utilitarianism and use virtue ethics? And is he obliged to think about ethics and meta-ethics in the ways preferred by the set of people whose preferences he’s tracking? If so, isn’t utilitarianism rather self-defeating in many possible worlds, including perhaps the world we inhabit?
(Meta-note: considerations like these are what make me think a normative ethics without an explicit complementary meta-ethics just isn’t a contender for actual morality. Too under-specified, too many holes.)
Why should that be a problem? Consequentialists have no obligation to believe what is true, only to believe whatever maximizes utility.
If (some version of) utilitarianism is true, and it maximizes utility to not believe so, then you self-modify to in fact not believe that, and so maximize utility and win. So what?
Believing things is just another action with straightforward consequences and is treated like any other action.
This would only be an issue for utilitarianism if you believed that “X is true” is true iff ideal moral agents believe that X is true. Which would be a weird position, given that even ideal Bayesian agents will rationally believe false things in some worlds.
I guess Parfit’s already said everything that should be said here—we’re almost following him line for line, no? Parfit doesn’t like self-defeating theories, is all. Mostly my hidden agenda is to point out that real utilitarianism would not look like choosing torture. It looks like saying “hey people, I’m your servant, tell me what you want me to be and I’ll mold myself into it as best I can”. But that’s really suspect meta-ethically. That’s not what morality is. And I think that becomes clearer when you show where utilitarianism ends up.
“Oh you don’t know what love is—you just do as you’re told.”
ETA: Basically, I’m with Richard Chappell. But, uh, as a theist—where he says “rational agent upon infinite reflection” or whatever, I say “God”, and that makes for some differences, e.g. moral disagreement works differently. (Also I try to push it up to super mega meta.)
Right, and if (some version of) utilitarianism is right, then that’s a good thing. The agent isn’t being exploited, it’s becoming less evil. We definitely want evil agents to roll over and do the right thing instead.
All morality tells you to shut up and do what The Rules say. Preference utilitarianism just has other agents’ preferences inherently included in The Rules.
In fact, the preference utilitarian in your example was able to do the right thing (believe in virtue ethics) only because they were a preference utilitarian. If they had been a deontologist, say, they would have remained evil. How is that self-defeating? It’s an argument in preference utilitarianism’s favor that a sufficiently smart agent can figure out what to do from scratch, i.e. without starting out as a (correct) virtue ethicist.
(Or maybe you’re thinking that the belief that utilitarianism sometimes involves letting others control your actions makes people more prone to roll over in general. Though for the kind of preference utilitarianism you have in mind, that shouldn’t be too problematic, I think.)
(Another Parfit-like point is that the categorical imperative can have basically the same effect, but in that case you’re limited by this incredibly slippery notion of “similar situation” and so on, which lets you make up a lot of bullshit, rather than by whatever population you decide gets to define morality. (That said, I still can’t believe Kant didn’t deal with that gaping hole, so I suppose he must have, somewhere.))
I don’t get it—why are you assuming that virtue ethics or the rules of the people are right such that always converging to them is a good aspect of your morality? Why not assume people are mostly dumb and so utilitarianism takes away any hope you could possibly have of doing the right thing (say, deontology)?
All morality tells you to shut up and do what The Rules say.
Yeah, but it’s meta-ethics that’s supposed to tell us where The Rules come from, not normative ethics, so normative ethics that implicitly answer the question are, like, duplicitous and bothersome. Or, like, maybe I’d be okay with it, but the implicit meta-ethics isn’t at all convincing, and maybe that’s the part that bothers me.
Never mind, I misunderstood your initial comment, I think.
I thought you were saying: if pref-util is right, pref-utilists may self-modify away from it, which refutes pref-util.
I now think you’re saying: we don’t know what is right, but if we assume pref-util, then we’ll lose part of our ability to figure it out, so we shouldn’t do that (yet).
Also, you’re saying that most people don’t understand morality better than us, so we shouldn’t take their opinions more seriously than ours. (Agreed.) But pref-utilists do take those opinions seriously; they’re letting their normative ethics influence their beliefs about their normative ethics. (Well, duh, consequentialism.)
In which case I’d (naively) say, let pref-util redistribute the probability mass you’ve assigned to pref-util any way it wants. If it wants to sacrifice it all for majority opinions, sure, but don’t give it more than that.
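A toy sketch of the bookkeeping I have in mind, with made-up credences and theory names (the numbers are purely illustrative, not anyone’s actual view): pref-util gets to spend only the probability mass you assigned to pref-util itself.

```python
# Hypothetical credences over moral theories (numbers invented for illustration).
credences = {"pref_util": 0.3, "deontology": 0.4, "virtue_ethics": 0.3}

# Suppose pref-util, conditional on being true, tells you to defer to the
# majority's favored theory (virtue ethics, in the example world above).
# Then it may hand over all of its own mass...
credences["virtue_ethics"] += credences.pop("pref_util")

# ...but it never touches the mass you'd already placed on other theories.
print(credences)  # {'deontology': 0.4, 'virtue_ethics': 0.6}
```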
Mostly my hidden agenda is to point out that real utilitarianism would not look like choosing torture. It looks like saying “hey people, I’m your servant, tell me what you want me to be and I’ll mold myself into it as best I can”.
This can also lead to the situation where if everyone decides to be a utilitarian, you wind up with a bunch of people asking each other what they want and answering “I want whatever the group wants”.