The best term I have found, the one that seems to most accurately describe the way I evaluate situations, is consequentialism. However, that may still be inaccurate. I don’t have a fully reliable way to determine what consequentialism entails; all I have, at the moment, is Wikipedia.
I tend to just use cost-benefit analysis. I also keep a mental, and quite arbitrary, scale of what things I do and don’t value, and to what degree, to break ties when I am presented with multiple equally beneficial choices. I also have a few heuristics. One of them says, essentially, that given a choice between a loss spread out amongst many and an equal loss concentrated on a few, the former is the more moral choice. Does that help?
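To make that concrete, here is a minimal sketch of the procedure; the value scale, the options, and every number are invented purely for illustration:

```python
# Hypothetical sketch of the decision procedure described above; the
# value scale and options are made up for illustration only.

# Arbitrary personal value scale: how much each kind of outcome matters to me.
VALUE_SCALE = {"money": 1.0, "free_time": 2.0, "reputation": 3.0}

def net_benefit(option):
    """Cost-benefit analysis: weighted benefits minus weighted costs."""
    return sum(VALUE_SCALE[kind] * amount
               for kind, amount in option["effects"].items())

def choose(options):
    """Pick the highest net benefit; on a tie, apply the heuristic that a
    loss spread over many people beats an equal loss borne by a few."""
    best = max(net_benefit(o) for o in options)
    tied = [o for o in options if net_benefit(o) == best]
    return max(tied, key=lambda o: o["people_sharing_loss"])

options = [
    {"name": "A", "effects": {"money": -10, "free_time": 5}, "people_sharing_loss": 100},
    {"name": "B", "effects": {"money": -10, "free_time": 5}, "people_sharing_loss": 3},
]
print(choose(options)["name"])  # "A": equal net benefit, but its loss is more widely spread
```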
It helps me understand your reasoning, yes. But if you aren’t arguing within a fairly consistent utilitarian framework, there’s not much point in trying to convince others that the intuitive option is correct in a dilemma designed to illustrate counterintuitive consequences of utilitarianism.
So far it sounds like you’re telling us that Specks is intuitively more reasonable than Torture, because the losses are so small and so widely distributed. Well, yes, it is. That’s the point.
At what point is utilitarianism not completely arbitrary?
I’m not a moral realist. At some point it is completely arbitrary. The meta-ethics here are way outside the scope of this discussion; suffice it to say that I find it attractive as a first approximation of ethical behavior anyway, because it’s a simple way of satisfying some basic axioms without going completely off the rails in situations that don’t require Knuth up-arrow notation to describe.
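(For reference, Knuth up-arrow notation iterates exponentiation: a single arrow is ordinary exponentiation, and each extra arrow iterates the operation before it,

$$a \uparrow b = a^{b}, \qquad a \uparrow\uparrow b = \underbrace{a^{a^{\cdot^{\cdot^{a}}}}}_{b\ \text{copies of }a}, \qquad a \uparrow\uparrow\uparrow b = \underbrace{a \uparrow\uparrow (a \uparrow\uparrow (\cdots \uparrow\uparrow a))}_{b\ \text{copies of }a}$$

so $3 \uparrow\uparrow 3 = 3^{3^{3}} = 3^{27} = 7{,}625{,}597{,}484{,}987$, while $3 \uparrow\uparrow\uparrow 3$ is a power tower of threes that many levels tall. Numbers of roughly that sort are what the Specks dilemma trades in.)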
But that’s all a sideline: if the choice of moral theory is arbitrary, then arguing about the consequences of one you don’t actually hold makes less sense than it otherwise would, not more.
I believe I suggested earlier that I don’t know what moral theory I hold, because I am not sure of the terminology. So I may in fact be a utilitarian and not know it, because I lack the vocabulary to say so. I asked “At what point is utilitarianism not completely arbitrary?” because I wanted to know more about utilitarianism. That’s all.
Ah. Well, informally, if you’re interested in pissing the fewest people off (which, as best I can tell, is the main point where moral abstractions intersect with physical reality), then it makes sense to evaluate the moral value of the actions you’re considering by the degree to which they piss people off. That corresponds loosely to preference utilitarianism: specifically negative preference utilitarianism, though extending it to the general version isn’t too tricky. I’m not a perfect preference utilitarian either. People are rather bad at knowing what they want, and I think there are situations where what they actually want trumps their stated preference; on the other hand, correspondence with stated preference is itself a preference, and I’m not sure exactly where the inflection points lie. But that ought to suffice as an outline of my motivations.
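Schematically, that rule is just a minimization. A minimal sketch, with hypothetical people, actions, and frustration scores invented purely for illustration:

```python
# Hypothetical sketch of negative preference utilitarianism as outlined
# above: choose the action that frustrates people's preferences least.

people = ["ann", "bob", "carol"]

# frustration[action][person]: how badly the action violates that person's
# preferences (0 = not at all). All scores are made up.
frustration = {
    "build_road": {"ann": 2, "bob": 0, "carol": 2},
    "build_park": {"ann": 0, "bob": 3, "carol": 0},
    "do_nothing": {"ann": 1, "bob": 1, "carol": 2},
}

def total_frustration(action):
    """Score an action by the total preference frustration it causes."""
    return sum(frustration[action][p] for p in people)

best = min(frustration, key=total_frustration)
print(best, total_frustration(best))  # build_park 3: the least-frustrating choice
```

Extending this to the general version just means also crediting actions for the preferences they satisfy, rather than only debiting the ones they frustrate.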
Thank you.