I think a better way to frame this issue would be the following method.
Present your philosophical thought-experiment.
Ask your subject for their response and their justification.
Ask your subject what would need to change for them to change their belief.
For example, suppose I respond to your question about the solitary traveler with "You shouldn't do it because of biological concerns." Accept the answer, and then ask: what would need to change in this situation for you to accept the killing of the traveler as moral?
I remember this method giving me deeper insight into the Happiness Box experiment.
Here is how the process works:
There is a happiness box. Once you enter it, you will be completely happy through living in a virtual world. You will never leave the box. Would you enter it?
Initial response: Yes, I would enter the box. Since my world is made up only of my perceptions of reality, there is no difference between the happiness box and the real world. Since I will be happier in the happiness box, I would enter.
Reframing question: What would need to change so that you would not enter the box?
My response: Well, if I had children or people depending on me, I could not enter.
Surprising conclusion: Aha! Then you do believe that there is a difference between the happiness box and the real world, namely your acceptance of the existence of other minds and the obligations those minds place on you.
That distinction was important to me, not only intellectually but in how I approached my life.
Hope this contributes to the conversation.
David
I find a similar strategy useful when I am trying to argue a point with a stubborn friend. I ask them, "What would I have to prove in order for you to change your mind?" If they answer "nothing," you know they are probably not truth-seekers.
Namely, the point at which your moral decision reverses helps to identify what this particular moral position is really about. There are many factors in every decision, so it can help to try varying each of them and to find what other conditions compensate for the variation.
For example, you wouldn't enter the happiness box if you suspected that the claim that it delivers true happiness is flawed, that it rests on some kind of lie or misunderstanding (on anyone's part); leaving your family on the outside is a special case of that suspicion. And here are further pieces of the scenario to vary: would you want a copy of yourself to enter the happiness box while your original self stays behind? Would you want a new child to be born inside the happiness box? And so on.
This seems to nicely fix something that I felt was wrong with the "least convenient possible world" heuristic. The LCPW only serves to make us consider a possibility seriously, and it may be too easy to come up with an LCPW. Asking what would change your mind helps us examine the decision boundary.
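To make the "decision boundary" analogy a bit more concrete, here is a toy sketch in Python. Everything in it (the factors, the thresholds, and the would_enter rule) is invented for illustration and is not anyone's actual position in this thread. It treats a stance on the happiness box as a decision function over a few factors, then varies one factor at a time and reports which variations flip the answer, which is the programmatic version of asking "what would have to change for you to change your mind?"

```python
# Illustrative only: a toy "decision function" for the happiness-box choice.
# The factors and the rule below are invented for the sake of the analogy.

from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Scenario:
    happiness_gain: float   # how much happier you expect to be inside (0..1)
    dependents: int         # people relying on you outside the box
    trust_in_claim: float   # how much you trust the "complete happiness" claim (0..1)


def would_enter(s: Scenario) -> bool:
    """A made-up stance: enter only if no one depends on you, the claim is
    credible, and the expected gain is substantial."""
    return s.dependents == 0 and s.trust_in_claim > 0.8 and s.happiness_gain > 0.5


def what_would_change_my_mind(s: Scenario) -> list[str]:
    """Vary one factor at a time and report the variations that flip the decision.

    This is the "examine the decision boundary" move: the factors whose
    variation reverses the answer are the ones the position is really about.
    """
    baseline = would_enter(s)
    flips = []
    probes = {
        "dependents": [0, 1, 3],
        "trust_in_claim": [0.0, 0.5, 0.9, 1.0],
        "happiness_gain": [0.1, 0.6, 1.0],
    }
    for factor, values in probes.items():
        for v in values:
            if would_enter(replace(s, **{factor: v})) != baseline:
                flips.append(f"{factor} = {v} flips the decision")
    return flips


if __name__ == "__main__":
    me = Scenario(happiness_gain=0.9, dependents=2, trust_in_claim=0.95)
    print("Enter the box?", would_enter(me))
    for line in what_would_change_my_mind(me):
        print(line)
```

Run on this example, only the dependents factor flips the decision, which mirrors David's exchange above: the position turns out to be about obligations to other people rather than about the quality of the simulated happiness.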
Great, David! I love it.
The happiness box is an interesting speculation, but it involves an assumption that, in my view, undermines it: “you will be completely happy.”
This assumes that happiness has a maximum, and that the best you can do is top up to that maximum. If that were true, then the happiness box might indeed be the peak of existence. But is it true?
Okay, well let’s apply exactly the technique discussed above:
If the hypothetical Omega tells you that there is indeed a maximum value for happiness, and that you will certainly be maximally happy inside the box, do you step into the box then?
Note: I'm asking that in order to give another example of the technique in action. But feel free to give a real answer if you'd like to.
Since you didn't answer the question one way or the other, I can't apply the second technique here: I can't ask what would have to change in order for you to change your answer.
What if we ignore the VR question? Omega tells you that killing and eating your children will make you maximally happy. Should you do it?
Omega can’t tell you that doing X makes you maximally happy unless doing X actually makes you maximally happy. And a scenario where doing X actually makes you maximally happy may be a scenario where you are no longer human and don’t have human preferences.
Omega could, of course, also say "you are mistaken when you conclude that being maximally happy in this scenario is not a human preference." However:
The conclusion that this is not a human preference is being made by you, the reader, not just by the person in the scenario. It is not possible to stipulate that you, the reader, are wrong about your own analysis of a scenario.
Even within the scenario, if someone is mistaken about something like this, it's a scenario where he can't trust his own reasoning abilities, so there's really nothing he can conclude about anything at all. (What if Omega tells you that you don't understand logic, and that every use of logic you think you have made was either wrong or correct only by coincidence?)
If the hypothetical Omega tells you that there is indeed a maximum value for happiness, and that you will certainly be maximally happy inside the box, do you step into the box then?
This would depend on my level of trust in Omega. (Why would I believe it? Because Omega said so. Why believe Omega? That depends on how much Omega has demonstrated near-omniscience and honesty.) And in the absence of Omega telling me so, I'm rather skeptical of the idea.
For my part, it’s difficult for me to imagine a set of observations I could make that would provide sufficient evidence to justify belief in many of the kinds of statements that get tossed around in these sorts of discussions. I generally just assume Omega adjusts my priors directly.