Now, assume some reasonable percent of the 3^^^3 people are like me in this respect. Maximizing the total utility for everybody results in: SPECKS!!
The point of picking a number the size of 3^^^3 is that it is so large that this statement is false.
Why would it ever be false, no matter how large the number?
Let S = the negated disutility of a speck, a small positive number.
Let F = the utility of the good feeling of protecting someone from torture.
Let P = the fraction of people who are like me (for whom F is positive), 0 < P ≤ 1.
Then the total utility for N people, no matter what N, is N(PF − S), which is > 0 as long as PF > S.
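For concreteness, here is a minimal numerical sketch of that calculation. The specific values of S, F, P, and N below are made up purely for illustration; the only point is that the sign of the total does not depend on N.

```python
# Illustrative values only -- S, F, and P are not given by the thought
# experiment, so these numbers are assumptions, not claims about it.
S = 1e-6      # negated disutility of one dust speck (small, positive)
F = 1e-3      # utility of the good feeling of protecting someone from torture
P = 0.1       # fraction of people for whom F applies, 0 < P <= 1
N = 10**100   # stand-in for 3^^^3, which is far too large to compute with

# Total utility of choosing SPECKS under this model: every one of the N
# people pays S, and a fraction P of them also gains F.
total = N * (P * F - S)

# The sign of the total is independent of N: it is positive exactly
# when P * F > S.
print(total > 0, P * F > S)  # -> True True
```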
I don’t accept that utility is additive.
Well, we can agree that utility is complicated. I think it’s possible to keep it additive by shifting complexities to the details of its calculation.
This knowledge among the participants adds something to the thought experiment. The original question weighs fifty years of torture for one person against a momentary dust speck for each of 3^^^3 people. You are asking about a version in which the 3^^^3 people also know what their specklessness costs, so their reactions enter the calculation on both sides. Notice how your formulation has 3^^^3 in both options, while the original question does not.
Yes, I stated and answered this exact objection two comments ago.
I have come to believe that—like a metaphorical Groundhog Day—every conversation on this topic is the same lines from the same play, with different actors.
This is the part of the play where I repeat more forcefully that you are fighting the hypo, but you don’t seem to realize that you are fighting the hypo.
In the end, the lesson of the problem is not about the badness of torture or about what counts as positive utility, but about learning what commitments you take on when you assert various principles about how moral decisions should be made.
I don’t realize it either; I’m not sure that it’s true. Forgive me if I’m missing something obvious, but:
gRR wants to include the preferences of the people getting dust-specked in his utility function.
But as you point out, he can’t; the hypothetical doesn’t allow it.
So instead, he includes his extrapolation of what their preferences would be if they were informed, and attempts to act on their behalf.
You can argue that that’s a silly way to construct a utility function (you seem to be heading that way in your third paragraph), but that’s a different objection.
If you want to answer a question that isn’t asked by the hypothetical, you are fighting the hypo. That’s basically the paradigmatic example of “fighting the hypo.”
I think gRR has the right answer to the question he is asking. But it is a different question from the one Eliezer was asking, and it teaches different lessons. To the extent that gRR thinks he has rebutted the lessons from Eliezer’s question, he’s incorrect.
I’m not sure why you think I’m asking a different question. Do you mean that in Eliezer’s original problem all of the utilities are fixed, including mine? But then the question appears entirely without content:
“Here are two numbers, this one is bigger than that one, your task is to always choose the biggest number. Now which number do you choose?”
Besides, if this is indeed what Eliezer meant, then his choice of “torture” for one of the numbers is inconsistent. Torture always has utility implications for other people, not just the person being tortured. I hypothesize that this is what makes it different (non-additive, non-commensurable, etc.) for some moral philosophers.
As fubarobfusco pointed out, your argument includes the implication that discovering or publicizing unpleasant truths can be morally wrong (because the participants were ignorant in the original formulation). It’s not obvious to me that any moral theory is committed to that position.
And without that moral conclusion, I think Eliezer is correct that a total utilitarian is committed to believing that choosing TORTURE over SPECKS maximizes total utility. The repugnant conclusion really is that repugnant. None of that was an obvious result to me.
Any utility function that does not give an explicit, overwhelmingly positive value to truth, and does give an explicit positive value to “pleasure”, would obviously imply that discovering or publicizing unpleasant truths can be morally wrong. I don’t see why it is relevant.
If all the utilities are specified completely by the problem text, then TORTURE maximizes the total utility by definition. There’s nothing to be committed about. But in this case, “torture” is just a label. It cannot refer to real torture, because real torture would produce different utility changes for other people.
It sounds to me as if you’re asserting that the 3^^^3 people’s ignorance of the fact that their specklessness depends on torture makes a positive moral difference in the matter.
That doesn’t seem unreasonable. That knowledge is probably worse than the speck.
Sure, it does have the odd implication that discovering or publicizing unpleasant truths can be morally wrong, though.
That’s a really good point. Does the “repugnant conclusion” problem for total utilitarians imply that they think informing others of bad news can be morally wrong in ordinary circumstances? Or is that just the product of a poor definition of utility?
I take it as fairly uncontroversial that a benevolent lie when no changes in decision by the listener are possible is morally acceptable. That is, falsely saying “Your son survived the plane crash” to the father who is literally moments from dying seems morally acceptable because the father isn’t going to decide anything differently based on that statement. But that’s an unusual circumstance, so I don’t think it should trouble us.
Those of us who think torture is worse (i.e. are not total utilitarians) probably are not committed to any position on the revealing-unpleasant-truths-conundrum. Right?
Agreed. Lying to others to manipulate them deprives them of the ability to make their own choices — which is part of complex human values — but in this case the father has no relevant choice to be deprived of.
Not that I can tell.
I suppose another way of looking at this is as a collective-action or extrapolated-volition problem. Each individual in the SPECKS case might prefer a momentary dust speck over the knowledge that their momentary comfort implied someone else’s 50 years of torture. However, a consequentialist agent choosing TORTURE over SPECKS is doing so in the belief that SPECKS is actually worse. Can that agent be implementing the extrapolated volition of the individuals?