Given: a paradoxical (to everybody except some moral philosophers) answer “TORTURE” appears to follow from expected utility maximization.
Possibility 1: the theory is right, everybody is wrong.
But in the domain of moral philosophy, our preferences should be treated with more respect than elsewhere. We cherish some of our biases. They are what makes us human; we wouldn't want to lose them, even if they sometimes give an "inefficient" answer from the point of view of the simplest greedy utility function.
These biases are probably reflexively consistent—even if we knew more, we would still wish to have them. At least, I can hypothesize that they are so, until proven otherwise. Simply showing me the inefficiency doesn’t make me wish not to have the bias. I value efficiency, but I value my humanity more.
Possibility 2: the theory (expected utility maximization) is wrong.
But the theory is rather nice and elegant, and I wouldn't wish to throw it away. So maybe there's another way to fix the paradox? Maybe something is wrong with the problem definition? And lo and behold—yes, there is.
Possibility 3: the problem is wrong.
As the problem is stated, the preferences of the 3^^^3 people are not taken into account. It is assumed that the people don't know and will never know about the situation—their total utility change from the whole affair is either nothing or a single small negative value.
If the people were aware of the situation, their utility changes would be different—a large negative value from knowing about the tortured person's plight and being forcibly forbidden to help, or a positive value from knowing they helped. Well, there would also be a negative value from the moral philosophers who would know and worry about the inefficiency, but I think that would be a relatively small value, after all.
Unfortunately, in the context of the problem, the people are unaware. The choice for the whole of humanity is given to me alone. What should I do? Should I play dictator and make a choice that would be repudiated by everyone, if only they knew? This seems wrong, somehow. Oh! I can simulate them, ask what they would prefer, and give their preferences a positive term within my own utility function. I would be like a representative of the people in a government, or an AI trying to implement their CEV.
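To make the bookkeeping concrete, here is a minimal sketch in Python of how such a preference term could flip the comparison. All the names and magnitudes (N_PEOPLE, U_SPECK, U_TORTURE, U_PREF_HONORED, U_PREF_OVERRIDDEN) are invented for illustration, and a merely large number stands in for 3^^^3; nothing here comes from the original problem statement.

```python
# A minimal sketch with invented utility magnitudes; 3^^^3 is replaced
# by a merely large stand-in, since no numeric type can hold the real thing.
# The point is the structure of the comparison, not the specific numbers.

N_PEOPLE = 10**15           # hypothetical stand-in for 3^^^3
U_SPECK = -1e-6             # assumed disutility of one dust speck, per person
U_TORTURE = -1e7            # assumed disutility of 50 years of torture

# Original framing: only the direct utility changes are counted.
torture_direct = U_TORTURE
specks_direct = N_PEOPLE * U_SPECK

# Modified framing: add a per-person term for the preference each person
# would express if asked (obtained by "simulating" them), as a term in the
# decision-maker's own utility function.
U_PREF_HONORED = 1e-3       # assumed per-person value of having one's preference respected
U_PREF_OVERRIDDEN = -1e-3   # assumed per-person cost of having one's preference overridden

torture_total = torture_direct + N_PEOPLE * U_PREF_OVERRIDDEN
specks_total = specks_direct + N_PEOPLE * U_PREF_HONORED

print("Direct utilities only:", "TORTURE" if torture_direct > specks_direct else "SPECKS")
print("With preference terms:", "TORTURE" if torture_total > specks_total else "SPECKS")
```

With these assumed numbers, the direct comparison favors torture, but once the (much larger) aggregate preference term is added, the comparison flips to specks.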
Result: SPECKS!! Hurray! :)
OK. I think I understand you now. Thanks for clarifying.