Moral particularism
For the record, I consider myself a consequentialist who is also a moral particularist.
Fair enough. I should have been more specific. I’m a particularist who thinks consequentialist reasoning is appropriate in certain contexts, but deontological reasoning is appropriate in other contexts. So I’m pretty sure “Other” is the right pick for me.
Is that possible? Can you both think a) that one should in general act so as to maximise happiness/utility/whatever, and b) there are no general moral rules?
I think that’s a contradiction.
Consequentialism doesn’t require a commitment to maximization of any particular variable. It’s the claim that only the consequences of actions are relevant to moral evaluation of the actions. I think that’s a weak enough claim that you can’t really call it a general moral principle. So one could believe that only consequences are morally relevant, but the way in which one evaluates actions based on their consequences does not conform to any general principle.
If Luke had said that he’s a utilitarian who is also a particularist, that would have been a contradiction.
That’s a good point. So I should take from Luke’s claim that he does not believe one should (as a moral rule) maximise expected utility, or anything like that? And that he would say it’s possible (if perhaps unlikely) for an action to be good even if it minimises expected utility?
I probably shouldn’t speak for Luke, but I’m guessing the answer to this is yes. If it isn’t, then I don’t understand how he’s a particularist.
I don’t see why he should be committed to this claim.
I took it to mean that Luke requires an agent to be at least somewhat consequentialist before he even thinks of its outlook as a morality.