Right, and if (some version of) utilitarianism is right, then that’s a good thing. The agent isn’t being exploited; it’s becoming less evil. We definitely want evil agents to roll over and do the right thing instead.
All morality tells you to shut up and do what The Rules say. Preference utilitarianism just has agents inherently included in The Rules.
In fact, the preference utilitarian in your example was able to do the right thing (believe in virtue ethics) only because they were a preference utilitarian. If they had been a deontologist, say, they would have remained evil. How is that self-defeating? It’s an argument in preference utilitarianism’s favor that a sufficiently smart agent can figure out what to do from scratch, i.e. without starting out as a (correct) virtue ethicist.
(Or maybe you’re thinking that believing utilitarianism does sometimes involve letting others control your actions makes people more prone to roll over in general. Though for the kind of preference utilitarianism you have in mind, that shouldn’t be too problematic, I think.)
(Another Parfit-like point is that the categorical imperative can have basically the same effect, but in that case you’re limited by this incredibly slippery notion of “similar situation” and so on, which lets you make up a lot of bullshit, rather than by whatever population you decide is the one who gets to define morality. (That said, I still can’t believe Kant didn’t deal with that gaping hole, so I suppose he must have, somewhere.))
I don’t get it—why are you assuming that virtue ethics or the rules of the people are right such that always converging to them is a good aspect of your morality? Why not assume people are mostly dumb and so utilitarianism takes away any hope you could possibly have of doing the right thing (say, deontology)?
All morality tells you to shut up and do what The Rules say.
Yeah, but it’s meta-ethics that’s supposed to tell us where The Rules come from, not normative ethics, so normative theories that implicitly answer that question are, like, duplicitous and bothersome. Or like, maybe I’d be okay with it, but the implicit meta-ethics isn’t at all convincing, and maybe that’s the part that bothers me.
Never mind, I misunderstood your initial comment, I think.
I thought you were saying: if pref-util is right, pref-utilists may self-modify away from it, which refutes pref-util.
I now think you’re saying: we don’t know what is right, but if we assume pref-util, then we’ll lose part of our ability to figure it out, so we shouldn’t do that (yet).
Also, you’re saying that most people don’t understand morality better than us, so we shouldn’t take their opinions more seriously than ours. (Agreed.) But pref-utilists do take those opinions seriously; they’re letting their normative ethics influence their beliefs about their normative ethics. (Well, duh, consequentialism.)
In which case I’d (naively) say, let pref-util redistribute the probability mass you’ve assigned to pref-util any way it wants. If it wants to sacrifice it all for majority opinions, sure, but don’t give it more than that.