No, I’m pretty sure it makes you notice. It’s “enough”: “barely enough”, but still “enough”. However, that doesn’t seem to be what’s really important. If I take your interpretation of the dilemma to be correct, in that there are no other side effects, then yes, the 3^^^3 people getting dust in their eyes is a much better choice.
The thought experiment is: 3^^^3 bad events, each just bad enough that you notice its badness. Considering further consequences of the particular bad thing means that there are in fact other things depending on your choice as well, and that’s a different thought experiment.
That is in no way what was said. Also, the idea of an event that somehow manages to have no effect aside from being bad is… insanely contrived. More contrived than the dilemma itself.
However, let’s say that instead of 3^^^3 people getting a dust speck in their eye, 3^^^3 people experience a single nanosecond of despair, which is immediately erased from their memory to prevent any psychological damage. If I had a choice between that and torturing a person for 50 years, then I would probably choose the former.
The notion of 3^^^3 events of any sort is far more contrived than the elimination of knock-on effects of an event. There isn’t enough matter in the universe to make that many dust specks, let alone the eyes to be hit and nervous systems to experience it. Of course it’s contrived. It’s a thought experiment. I don’t assert that the original formulation makes it entirely clear; my point is to keep the focus on the actual relevant bit of the experiment—if you wander, you’re answering a less interesting question.
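For concreteness, here is a minimal sketch of how Knuth’s up-arrow notation unpacks (my own illustration, not from the original post; the usual rough figure for atoms in the observable universe is ~10^80, which 3^^^3 exceeds beyond any hope of comparison):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow operation: a with n arrows applied to b.
    One arrow is ordinary exponentiation; n arrows means b-fold
    iteration of the (n-1)-arrow operation."""
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up_arrow(a, n - 1, result)
    return result

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3**(3**3) = 3**27 = 7625597484987
# 3^^^3 = 3^^(3^^3): a power tower of 7,625,597,484,987 threes.
# Even the second storey of that tower, 3**7625597484987, has
# trillions of digits, so don't try to evaluate up_arrow(3, 3, 3).
```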
I don’t agree. The existence of 3^^^3 people, or 3^^^3 dust specks, is impossible because there isn’t enough matter, as you said. The existence of an event that has only effects that are tailored to fit a particular person’s idea of ‘bad’ does not fit my model of how causality works. That seems like a worse infraction, to me.
However, all of that is irrelevant, because I answered the more “interesting question” in the comment you quoted. To be blunt, why are we still talking about this?
I’m not sure I agree, but “which impossible thing is more impossible” does seem an odd thing to be arguing about, so I’ll not go into the reasons unless someone asks for them.
I meant a more generalized you, in my last sentence. You in particular did indeed answer the more interesting question.
Can you explain a bit about your moral or decision theory that would lead you to conclude that?
Yes. I believe that because any suffering caused by the 3^^^3 dust specks is spread across 3^^^3 people, it is a lesser evil than torturing a man for 50 years. Assuming there to be no side effects to the dust specks.
When I participated in this debate, this post convinced me that a utilitarian must believe that dust specks cause more overall suffering (or whatever badness measure you prefer). Since I already wasn’t a utilitarian, this didn’t bother me.
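The arithmetic driving that conclusion can be sketched with made-up numbers (the disutility values below are placeholders of my own, not anything from the post): any fixed, nonzero per-speck disutility, multiplied by 3^^^3 people, swamps any finite disutility you assign to 50 years of torture.

```python
# Illustrative (entirely made-up) disutility values, in arbitrary units:
speck_disutility = 1             # one dust speck: barely noticeable
torture_disutility = 10 ** 30    # 50 years of torture: astronomically worse

# 3^^^3 itself is far too large to compute, but even a laughably
# small lower bound on it already settles the comparison. 3**19683
# (about 9,400 digits) is unimaginably smaller than 3^^^3:
people_lower_bound = 3 ** 19683

total_specks = speck_disutility * people_lower_bound
print(total_specks > torture_disutility)  # True
```

However you rescale the two placeholder values, the conclusion is unchanged: no finite torture disutility survives multiplication of the speck disutility by 3^^^3.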
As a utilitarian (in broad strokes), I agree, and this doesn’t bother me because this example is so far out of the range of what is possible that I don’t object to saying, “yes, somewhere out there torture might be a better choice.” I don’t have to worry about that changing what the answer is around these parts.
That’s not quite what I meant by “explain”—I had understood that to be your position, and was trying to get insight into your reasoning.
Drawing an analogy to mathematics, would you say that this is an axiom, or a theorem?
If an axiom, it clearly must be produced by a schema of some sort (as you clearly don’t have 3^^^3 incompressible rules in your head). Can you explore somewhat the nature of that schema?
If a theorem, what sort of axioms, and how arranged, produce it?
That’s not general enough to mean very much: it fits a number of deontological moral theories and a few utilitarian ones (the right answer within virtue ethics depends far too heavily on assumptions to mean much), and seems to fit a number of others if you don’t look too closely. Its validity depends greatly on which you’ve picked.
As best I can tell, the most common utilitarian objection to Torture vs. Dust Specks is to deny that Specks are individually of moral significance, which seems to me to miss the point rather badly. Another is to treat various kinds of disutility as incommensurate with each other, which is at least consistent with the spirit of the argument but leads to some rather weird consequences around the edge cases.
No-one asked for a general explanation.
The best term I have found, the one that seems to describe the way I evaluate situations the most accurately, is consequentialism. However, that may still be inaccurate. I don’t have a fully reliable way to determine what consequentialism entails; all I have is Wikipedia, at the moment.
I tend to just use cost-benefit analysis. I also have a mental, and quite arbitrary, scale of what things I do and don’t value, and to what degree, to avoid situations where I am presented with multiple, equally beneficial choices. I also have a few heuristics. One of them essentially says that given a choice between a loss that is spread out amongst many, and an equal loss divided amongst the few, the former is the more moral choice. Does that help?
It helps me understand your reasoning, yes. But if you aren’t arguing within a fairly consistent utilitarian framework, there’s not much point in trying to convince others that the intuitive option is correct in a dilemma designed to illustrate counterintuitive consequences of utilitarianism.
So far it sounds like you’re telling us that Specks is intuitively more reasonable than Torture, because the losses are so small and so widely distributed. Well, yes, it is. That’s the point.
At what point is utilitarianism not completely arbitrary?
I’m not a moral realist. At some point it is completely arbitrary. The meta-ethics here are way outside the scope of this discussion; suffice it to say that I find it attractive as a first approximation of ethical behavior anyway, because it’s a simple way of satisfying some basic axioms without going completely off the rails in situations that don’t require Knuth up-arrow notation to describe.
But that’s all a sideline: if the choice of moral theory is arbitrary, then arguing about the consequences of one you don’t actually hold makes less sense than it otherwise would, not more.
I believe I suggested earlier that I don’t know what moral theory I hold, because I am not sure of the terminology. So I may, in fact, be a utilitarian, and not know it, because I have not the vocabulary to say so. I asked “At what point is utilitarianism not completely arbitrary?” because I wanted to know more about utilitarianism. That’s all.
Ah. Well, informally, if you’re interested in pissing the fewest people off, which as best I can tell is the main point where moral abstractions intersect with physical reality, then it makes sense to evaluate the moral value of actions you’re considering according to the degree to which they piss people off. That loosely corresponds to preference utilitarianism: specifically negative preference utilitarianism, but extending it to the general version isn’t too tricky. I’m not a perfect preference utilitarian either (people are rather bad at knowing what they want; I think there are situations where what they actually want trumps their stated preference; but correspondence with stated preference is itself a preference and I’m not sure exactly where the inflection points lie), but that ought to suffice as an outline of motivations.
Thank you.