It wouldn’t follow that it is a good idea, or an efficient one. But it would follow that it is the preferred idea, as calculated by my utility function, which has non-zero terms for the preferences of other people.
Fortunately, my simulation of other people doesn’t suddenly wish to help an arbitrary person by donating a dollar with 99% transaction cost.
Hm. As with Maelin’s comment above, I seem to agree with every part of this comment, but I don’t understand where it’s going. Perhaps I missed your original point altogether.
My point was that the “SPECKS!!” answer to the original problem, which is intuitively obvious to (I think) most people here, is not necessarily wrong. It can directly follow from expected utility maximization, if the utility function values people’s own choices, even when those choices are “economically” suboptimal.
A substantial part of talking about utility functions is to assert that we are trying to maximize something about utility (total, average, or whatnot). It seems very strange to say that we can maximize utility by being inefficient in our conversion of other resources into utility. If your goal is to avoid certain “efficient” conversions for other reasons, then it doesn’t make a lot of sense to say that you are really trying to implement a utility function.
In other words, Walzer’s Spheres of Justice concept, which states that some trade-offs are morally impermissible, is not really implementable in a utility function. To the extent that he (or I) might be modeled by a utility function, there are inevitably going to be errors in what the function predicts I would want or very strange discontinuities in the function.
But I am trying to maximize the total utility, just a different one.
OK, let me put it this way. I will drop the terms for other people’s preferences from my utility function. It is now entirely self-centered. But it still values the good feeling I get if I’m allowed to participate in saving someone from fifty years of torture. The value of this feeling is much greater than the minuscule negative utility of a dust speck. Now, assume some reasonable percentage of the 3^^^3 people are like me in this respect. Maximizing the total utility for everybody results in: SPECKS!!
Now an objection can be stated: by the conditions of the problem, I cannot change the utilities of the 3^^^3 people. They are given, and are equal to a minuscule negative value corresponding to the small speck of dust. Evil forces give me the sadistic choice and don’t allow me to share the good news with everyone. OK. But I can still imagine what the people would have preferred if given a choice. So I add a term for their preference to my utility function. I’m behaving like a representative of the people in a government. Or like a Friendly AI trying to implement their CEV.
My arguments have nothing to do with Walzer’s Spheres of Justice concept, AFAICT.
The point of picking a number the size of 3^^^3 is that it is so large that this statement is false. Even if 99% are like you, I can keep adding ^ and falsify the statement. If utility is additive at all, torture is the better choice.
My reference to Walzer was simply to note that many interesting moral theories exist that do not accept that utility is additive. I don’t accept that utility is additive.
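(For scale, a worked aside that is not part of either comment, spelling out the “^” shorthand in Knuth’s up-arrow notation:

```latex
3\uparrow\uparrow\uparrow 3
  = 3\uparrow\uparrow(3\uparrow\uparrow 3)
  = 3\uparrow\uparrow 3^{3^{3}}
  = 3\uparrow\uparrow 7{,}625{,}597{,}484{,}987
```

that is, a power tower of 3s whose height is 7,625,597,484,987. Each additional “^” iterates the whole construction again.)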
Why would it ever be false, no matter how large the number?
Let S = negated disutility of speck, a small positive number. Let F = utility of good feeling of protecting someone from torture. Let P = the fraction of people who are like me (for whom F is positive), 0 < P ≤ 1. Then the total utility for N people, no matter what N, is N(P*F − S), which is > 0 as long as P*F > S.
Well, we can agree that utility is complicated. I think it’s possible to keep it additive by shifting complexities to the details of its calculation.
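Here is a minimal Python sketch of the N(P*F − S) arithmetic above, using purely hypothetical values for S, F, and P (none of these numbers are given by the problem; they only illustrate that the sign of the total does not depend on N):

```python
# Hypothetical illustration of the additive model N * (P*F - S).
# The specific values of S, F, and P are made up for this example.

S = 1e-9   # disutility of one dust speck, with its sign already negated
F = 1.0    # utility of the good feeling of protecting someone from torture
P = 0.5    # fraction of people for whom F applies, 0 < P <= 1

def total_utility(n_people: int) -> float:
    """Total utility over n_people under the model N * (P*F - S)."""
    return n_people * (P * F - S)

# No matter how large N gets, the total stays positive whenever P*F > S.
for n in (10, 10**6, 10**100):
    assert total_utility(n) > 0
```

Whether a term like F is allowed to appear in the sum at all is exactly what the surrounding comments dispute.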
This knowledge among the participants is an addition to the thought experiment. The original question:
You are asking:
Notice how your formulation has 3^^^3 in both options, while the original question does not.
Yes, I stated and answered this exact objection two comments ago.
I have come to believe that—like a metaphorical Groundhog Day—every conversation on this topic is the same lines from the same play, with different actors.
This is the part of the play where I repeat more forcefully that you are fighting the hypo, but don’t seem to be realizing that you are fighting the hypo.
In the end, the lesson of the problem is not about the badness of torture or what things count as positive utility, but about learning what commitments you make with various assertions about the way moral decisions should be made.
I don’t realize it either; I’m not sure that it’s true. Forgive me if I’m missing something obvious, but:
gRR wants to include the preferences of the people getting dust-specked in his utility function.
But as you point out, he can’t; the hypothetical doesn’t allow it.
So instead, he includes his extrapolation of what their preferences would be if they were informed, and attempts to act on their behalf.
You can argue that that’s a silly way to construct a utility function (you seem to be heading that way in your third paragraph), but that’s a different objection.
If you want to answer a question that isn’t asked by the hypothetical, you are fighting the hypo. That’s basically the paradigmatic example of “fighting the hypo.”
I think gRR has the right answer to the question he is asking. But it is a different question from the one Eliezer was asking, and it teaches different lessons. To the extent that gRR thinks he has rebutted the lessons from Eliezer’s question, he’s incorrect.
I’m not sure why you think I’m asking a different question. Do you mean to say that in Eliezer’s original problem all of the utilities are fixed, including mine? But then the question appears entirely without content:
“Here are two numbers, this one is bigger than that one, your task is to always choose the biggest number. Now which number do you choose?”
Besides, if this is indeed what Eliezer meant, then his choice of “torture” for one of the numbers is inconsistent. Torture always has utility implications for other people, not just the person being tortured. I hypothesize that this is what makes it different (non-additive, non-commensurable, etc.) for some moral philosophers.
As fubarobfusco pointed out, your argument includes the implication that discovering or publicizing unpleasant truths can be morally wrong (because the participants were ignorant in the original formulation). It’s not obvious to me that any moral theory is committed to that position.
And without that moral conclusion, I think Eliezer is correct that a total utilitarian is committed to believing that choosing TORTURE over SPECKS maximizes total utility. The repugnant conclusion really is that repugnant. None of that was an obvious result to me.
Any utility function that does not give an explicit overwhelmingly positive value to truth, and does give an explicit positive value to “pleasure” would obviously include the implication that discovering or publicizing unpleasant truths can be morally wrong. I don’t see why it is relevant.
If all the utilities are completely specified by the problem text, then TORTURE maximizes the total utility by definition. There’s nothing to be committed to. But in this case, “torture” is just a label. It cannot refer to real torture, because real torture would produce different utility changes for people.
It sounds to me as if you’re asserting that the ignorance of the 3^^^3 people of the fact that their specklessness depends on torture makes a positive moral difference in the matter.
That doesn’t seem unreasonable. That knowledge is probably worse than the speck.
Sure, it does have the odd implication that discovering or publicizing unpleasant truths can be morally wrong, though.
That’s a really good point. Does the “repugnant conclusion” problem for total utilitarians imply that they think informing others of bad news can be morally wrong in ordinary circumstances? Or is it just the product of a poor definition of utility?
I take it as fairly uncontroversial that a benevolent lie when no changes in decision by the listener are possible is morally acceptable. That is, falsely saying “Your son survived the plane crash” to the father who is literally moments from dying seems morally acceptable because the father isn’t going to decide anything differently based on that statement. But that’s an unusual circumstance, so I don’t think it should trouble us.
Those of us who think torture is worse (i.e. are not total utilitarians) probably are not committed to any position on the revealing-unpleasant-truths-conundrum. Right?
Agreed. Lying to others to manipulate them deprives them of the ability to make their own choices — which is part of complex human values — but in this case the father doesn’t have any relevant choice to deprive him of.
Not that I can tell.
I suppose another way of looking at this is a collective-action or extrapolated-volition problem. Each individual in the SPECKS case might prefer a momentary dust speck over the knowledge that their momentary comfort implied someone else’s 50 years of torture. However, a consequentialist agent choosing TORTURE over SPECKS is doing so in the belief that SPECKS is actually worse. Can that agent be implementing the extrapolated volition of the individuals?
Well, OK, sure, but… can’t anything follow from expected utility maximization, the way you’re approaching it? For all (X, Y), if someone chooses X over Y, that can directly follow from expected utility maximization, if the utility function values X more than Y.
If that means the choice of X over Y is not necessarily wrong, OK, but it seems therefore to follow that no choice is necessarily wrong.
I suspect I’m still missing your point.
Given: a paradoxical (to everybody except some moral philosophers) answer “TORTURE” appears to follow from expected utility maximization.
Possibility 1: the theory is right, everybody is wrong.
But in the domain of moral philosophy, our preferences should be treated with more respect than elsewhere. We cherish some of our biases. They are what makes us human; we wouldn’t want to lose them, even if they sometimes give an “inefficient” answer from the point of view of the simplest greedy utility function.
These biases are probably reflectively consistent—even if we knew more, we would still wish to have them. At least, I can hypothesize that they are, until proven otherwise. Simply showing me the inefficiency doesn’t make me wish not to have the bias. I value efficiency, but I value my humanity more.
Possibility 2: the theory (expected utility maximization) is wrong.
But the theory is rather nice and elegant; I wouldn’t wish to throw it away. So, maybe there’s another way to resolve the paradox? Maybe something is wrong with the problem definition? And lo and behold—yes, there is.
Possibility 3: the problem is wrong.
As the problem is stated, the preferences of the 3^^^3 people are not taken into account. It is assumed that the people don’t know and will never know about the situation—this follows because their total utility change from the whole affair is either nothing or a single small negative value.
If people were aware of the situation, their utility changes would be different—a large negative value from knowing about the tortured person’s plight and being forcibly forbidden to help, or a positive value from knowing they helped. Well, there would also be a negative value from moral philosophers who would know and worry about inefficiency, but I think it would be a relatively small value, after all.
Unfortunately, in the context of the problem, the people are unaware. The choice for the whole of humanity is given to me alone. What should I do? Should I play dictator and make a choice that would be repudiated by everyone, if only they knew? This seems wrong, somehow. Oh! I can simulate them, ask what they would prefer, and give their preference a positive term within my own utility function. I would be the representative of the people in a government, or an AI trying to implement their CEV.
Result: SPECKS!! Hurray! :)
OK. I think I understand you now. Thanks for clarifying.