As stated, the question comes down to acting on an opinion you have about an unknown, but (within the premises of this problem) potentially knowable, conclusion about your own utility function. And that is:
Which is larger:
1) the amount of positive utility you gain from knowing the most disutile truths that exist for you
OR
2) the amount of utility you gain from knowing the most utile falsehoods that exist for you
ALMOST by definition of the word utility, you would choose the truth (white box) if and only if 1) is larger, and you would choose the falsehood (black box) if and only if 2) is larger. I say almost by definition because all answers of the form “I would choose the truth even if it was worse for me” are really statements that the utility you place on the truth is higher than Omega has assumed, which violates the assumption that Omega knows your utility function and speaks truthfully about it.
I say ALMOST by definition because we have to consider the other piece of the puzzle: when I open box 2) there is a machine that “will reprogram your mind.” Does this change anything? Well, it depends on which utility function Omega is using to make her calculations of my long-term utility. Is Omega using my utility function BEFORE the machine reprograms my mind, or after? Is the me after the reprogramming really still me? I think within the spirit of the problem we must assume that 1) the utility happens to be maximized both for me before the reprogram and for me after the reprogram (perhaps my utility function does not change at all in the reprogramming), and 2) Omega has correctly included the amount of disutility I would attach to the particular programming change, and this is factored into her calculations, so that the proposed falsehood and mind reprogramming do in fact, on net, give the maximum utility I can get from knowing the falsehood PLUS being reprogrammed.
Within these constraints, we find that the “ALMOST” above can be removed if we include the (dis)utility I have for the reprogramming in the calculation. So:
Which is larger:
1) the amount of positive utility you gain from knowing the most disutile truths that exist for you
OR
2) the amount of utility you gain from believing the most utile falsehoods that exist for you AND being reprogrammed to believe them
So ultimately, the question of which box we would choose is the question above. I think to say anything else is to say “my utility is not my utility,” i.e. to contradict yourself.
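To make that comparison concrete, here is a minimal sketch of the decision rule in Python. The utility numbers are entirely hypothetical (in the problem, only Omega would know them), and the function and parameter names are my own, not anything taken from the problem statement:

```python
# Toy illustration of the decision rule discussed above.
# All utility values are invented; in the problem, only Omega knows them.

def choose_box(u_worst_truth: float, u_best_falsehood_with_reprogramming: float) -> str:
    """Pick the box whose total long-run utility, as Omega computes it, is larger."""
    if u_worst_truth >= u_best_falsehood_with_reprogramming:
        return "white box (worst truth)"
    return "black box (best falsehood + reprogramming)"

# Example with arbitrary numbers: the falsehood package is slightly less bad here,
# so the rule picks the black box.
print(choose_box(u_worst_truth=-50.0, u_best_falsehood_with_reprogramming=-45.0))
```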
In my case, I would choose the white box. On reflection, considering the long run, I doubt that there is a falsehood PLUS a reprogramming that I would accept, as a combination, as more utile than the worst true fact (with no reprogramming attached) that I could ever expect to get. Certainly, this is the Occam’s razor answer, the ceteris paribus answer. GENERALLY, we believe that knowing more is better for us than being wrong. Generally, we believe that someone else meddling with our minds has a high disutility to us.
For completeness I think these are straightforward conclusions from “playing fair” in this question, from accepting an Omega as postulated.
1) If Omega assures you the utility for 2) (including the disutility of the reprogramming as experienced by your pre-reprogrammed self) is 1% higher than the utility of 1), then you want to choose 2), to choose the falsehood and the reprogramming. To give any other answer is to presume that Omega is wrong about your utility, which violates the assumptions of the question.
2) If Omega assures you the utilities for 2) and 1) are equal, it doesn’t matter which one you choose. As much as you might think “all other things being equal, I’ll choose the truth,” you must accept that the value you place on the truth has already been factored in, and the blip-up from choosing the truth will be balanced by some other disutility in a non-truth area. Since you can be pretty sure that the utility you place on the truth is very much unrelated to pain and pleasure and joy and love and so on, you are virtually guaranteeing you will FEEL worse choosing the truth, but that this worse feeling will just barely be almost worth it.
Finally, I tried to play nice within the question. But it is entirely possible, and I would say likely, that there can never be an Omega who could know ahead of time, with the kind of detail required, what your future utility would be, at least not in our Universe. Consider just the quantum uncertainties (or future Everett universe splits). It seems most likely that your future net utility covers a broad range of outcomes in different Everett branches. In that case, it seems very likely that there is no one truth that minimizes your utility in all your possible futures, and no one falsehood that maximizes it in all your possible futures. In this case we would have a distribution of utility outcomes from 1) and 2), and it is not clear that we know how to choose between two different distributions. Possibly utility is defined in such a way that it would be the expectation value that “truly” mattered to us, but that puts, I think, a very serious constraint on utility functions and how we interact with them, one that I am not sure could be supported.
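To illustrate what that last point amounts to, here is a small sketch with invented branch probabilities and utilities, showing why ranking the two boxes by expectation value alone is a substantive assumption rather than a free move:

```python
# Hypothetical sketch of the Everett-branch point above: each choice yields a
# distribution of utilities across branches, not a single number. The branch
# probabilities and utility values below are invented purely for illustration.

truth_branches = [(0.5, -80.0), (0.5, -10.0)]        # (probability, utility) pairs
falsehood_branches = [(0.9, -40.0), (0.1, -60.0)]

def expected_utility(branches):
    """Expectation value over branches -- only one possible way to rank distributions."""
    return sum(p * u for p, u in branches)

print(expected_utility(truth_branches))      # -45.0
print(expected_utility(falsehood_branches))  # -42.0
# The expectations are close, but the spreads differ sharply, which is exactly
# why "just take the expectation" is a nontrivial constraint on utility functions.
```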
Quite a detailed analysis, and correct within its assumptions. It is important to know where Omega is getting its information on your utility function. That said, since Omega implicitly knows everything you know (since it needs to know that in order to also know everything you don’t know, and thus to be able to pose the problem at all), it implicitly knows your utility function already. Obviously, accepting a falsehood that perverts your utility function into something counter to your existing utility function, just to maximize an easier target, would be something of disutility to you as you are at present, and not something that you would accept if you were aware of it. Accordingly, it is a safe assumption that Omega has based its calculations on your utility function as it stood before accepting the information, and for the purposes of this problem, that is exactly the case. This is your case (2); if a falsehood intrinsically conflicts with your utility function in whatever way, it generates disutility (and thus, is probably suboptimal). If your utility function is inherently hostile to such changes, this presents a limitation on the factset Omega can impose upon you.
That said, your personal answer seems to place rather conservative bounds on the nature of what Omega can do to you. Omega has not presented bounds on its utilities; instead, it has advised you that they are maximized within fairly broad terms. Similarly, it has not assured you of anything about the relative values of those utilities, but the structure of the problem as Omega presents it (which you know is correct, because Omega has already arbitrarily demonstrated its power and trustworthiness) means you are dealing with an outcome pump attached directly to your utility function. Since the structure of the problem gives it a great deal of room in which to operate, the only real limitation is the nature of your own utility function. Sure, it’s entirely possible that your utility function could be laid out in such a way as to strongly emphasize the disutility of misinformation… but that just limits the nice things Omega can do for you; it does nothing to save you from the bad things it can do to you. It remains valid to show you a picture and say ‘the picture you are looking at is a basilisk; it causes any human that sees it to die within 48 hours’. Even without assuming basilisks, you’re still dealing with a hostile outcome pump. There’s bound to be some truth that you haven’t considered that will lead you to a bad end. And if you want to examine it in terms of Everett branches, Omega is arbitrarily powerful. It has the power to compute all possible universes and give you the information which has maximally bad consequences for your utility function in aggregate across all possible universes (this implies, of course, that Omega is outside the Matrix, but pretty much any problem invoking Omega does that).
Even so, Omega doesn’t assure you of anything regarding the specific weights of the two pieces of information. Utility functions differ, and since there’s nothing Omega could say that would be valid for all utility functions, there’s nothing it will say at all. It’s left to you to decide which you’d prefer.
That said, I do find it interesting to note under which lines of reasoning people will choose something labelled ‘maximum disutility’. I had thought it to be a more obvious problem than that.
Wire-heading, drug-addiction, lobotomy, black-box: all seem morally similar to me. Heck, my own personal black box would need nothing more than to have me believe that the universe is just a little more absurd than I already believe, that the laws of physics and the progress of humanity are a fever-dream, a hallucination. From there I would lower my resistance to wire-heading and drug-addiction. Even if I still craved the “truth” (my utility function being largely unchanged), these new facts would lead me to believe there was less possibility of utility from pursuing it, and so the rather obvious utility of drug- or electronically-induced pleasure would win my not-quite-factual day.
The white box, and a Nazi colonel-dentist with his tools laid out, talking to me about what he was going to do to me until I chose the black box, are morally similar. I do not know why the Nazis/Omega want me to black-box it. I do not know the extent of the disutility the colonel-dentist will actually inflict upon me. I do know my fear is at minimum nearly overwhelming, and may indeed overwhelm me before the day is done.
Being broken, in the sense intended by those who torture you for a result, and choosing the black box are morally equivalent to me. Abandoning a long-term principled commitment to the truth in favor of the short-term but very high utility of giving up, the short-term utility of totally abandoning myself into the control of an evil god to avoid his torture, is what I am being asked to do in choosing the black box.
It’s ALWAYS at least a little scary to choose reality over self-deception, over the euphoria of drugs and painkillers. The utility one derives from making this choice is much colder than the utility one derives from succumbing: it comes more, it seems, from the neocortex and less from the limbic system or the lizard brain of fast fear responses.
My utility AFTER I choose the white box may well be less than if I chose the black box. The scary thing in the white box might be that bad. But my life up to now has rewarded me vastly for resisting drug addiction, for resisting gorping my own brain in the pursuit of non-reality-based pleasure. Indeed, it has rewarded me for resisting fear.
So before I have made my choice, I do not want to choose the lie in order to get the dopamine, or the epinephrine or whatever it is that the wire gives me. That is LOW utility to me before I make the choice. Resisting choosing that out of fear has high utility to me.
Will I regret my choice afterwards? Maybe, since I might be a broken, destroyed shell of a human, subject to brain patterns for which I had no evolutionary preparation.
Would I admire someone who chose the black box? No. Would I admire someone who had chosen the white box? Yes. Doing things that I would admire in others is a strong source of utility in me (and in many others of course).
Do you think your Omega problem contains elements that go beyond this question: would you abandon your principled commitment to truth, and choose believing a lie and wire-heading, under the threat of an unknown future torture inflicted upon you by a powerful entity you cannot and do not understand?
Curious to know what you think of Michaelos’ construction of the white-box.
Thank you for that link, reading it helped me clarify my answer.