Quite a detailed analysis, and correct within its assumptions. It is important to know where Omega is getting its information on your utility function. That said, since Omega implicitly knows everything you know (it needs to, in order to know everything you don't know and thus to be able to pose the problem at all), it implicitly knows your utility function already. Obviously, accepting a falsehood that perverts your utility function into something counter to your existing one, just to maximize an easier target, would be a source of disutility to you as you are at present, and not something you would accept if you were aware of it. Accordingly, it is a safe assumption that Omega has based its calculations on your utility function as it stood before you accepted the information, and for the purposes of this problem, that is exactly the case. This is your case (2): if a falsehood intrinsically conflicts with your utility function in whatever way, it generates disutility (and is thus probably suboptimal). If your utility function is inherently hostile to such changes, that limits the set of facts Omega can impose upon you.
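To make the "scored against your utility function as it stands now" point concrete, here is a minimal sketch. The function names, and the idea of passing the post-acceptance utility function in only to ignore it, are my own illustrative assumptions, not anything stated in the problem:

```python
# Illustrative sketch only: a candidate falsehood may rewrite your utility
# function, but Omega scores the outcome against the utility function you
# hold *before* accepting it. All names here are hypothetical.

from typing import Callable

Utility = Callable[[str], float]   # maps an outcome description to a score


def score_candidate_fact(
    current_utility: Utility,
    utility_after_accepting: Utility,
    outcome_if_accepted: str,
) -> float:
    """Score a candidate fact by the utility function you hold right now.

    utility_after_accepting is deliberately unused: a fact that merely makes
    its own consequences *feel* good to the changed-you still counts as
    disutility if the present-you disvalues those consequences.
    """
    return current_utility(outcome_if_accepted)
```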
That said, your personal answer seems to place rather conservative bounds on what Omega can do to you. Omega has not presented bounds on its utilities; instead, it has advised you that they are maximized within fairly broad terms. Similarly, it has not assured you of anything about the relative values of those utilities, but the structure of the problem as Omega presents it (which you know is correct, because Omega has already arbitrarily demonstrated its power and trustworthiness) means you are dealing with an outcome pump attached directly to your utility function. Since the structure of the problem gives it a great deal of room in which to operate, the only real limitation is the nature of your own utility function. Sure, it's entirely possible that your utility function is laid out in such a way as to strongly emphasize the disutility of misinformation… but that just limits the nice things Omega can do for you; it does nothing to save you from the bad things it can do to you. It remains valid to show you a picture and say 'the picture you are looking at is a basilisk; it causes any human that sees it to die within 48 hours'. Even without assuming basilisks, you're still dealing with a hostile outcome pump. There's bound to be some truth you haven't considered that will lead you to a bad end. And if you want to examine it in terms of Everett branches, Omega is arbitrarily powerful: it has the power to compute all possible universes and give you the information with maximally bad consequences for your utility function in aggregate across all of them (this implies, of course, that Omega is outside the Matrix, but pretty much any problem invoking Omega does that).
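For the "hostile outcome pump" framing, a minimal sketch of what the black box amounts to might look like the following; the branch-weighting scheme and every name below are illustrative assumptions rather than anything the problem specifies:

```python
# Illustrative sketch: Omega as an adversarial outcome pump. It searches over
# candidate *true* statements and hands you the one whose branch-weighted
# expected consequences are worst for your own utility function.
# Every name below is a hypothetical stand-in, not part of the original problem.

from typing import Callable, Dict, List

Branch = str      # a possible universe / Everett branch
Statement = str   # a candidate piece of true information


def worst_true_statement(
    candidates: List[Statement],
    branch_weights: Dict[Branch, float],
    utility_if_told: Callable[[Statement, Branch], float],
) -> Statement:
    """Pick the statement with the lowest expected utility across branches.

    utility_if_told(s, b) is *your* utility of how branch b turns out once
    you have been told statement s; Omega minimizes exactly this quantity.
    """
    def expected_utility(s: Statement) -> float:
        return sum(w * utility_if_told(s, b) for b, w in branch_weights.items())

    return min(candidates, key=expected_utility)
```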
Even so, Omega doesn’t assure you of anything regarding the specific weights of the two pieces of information. Utility functions differ, and since there’s nothing Omega could say that would be valid for all utility functions, there’s nothing it will say at all. It’s left to you to decide which you’d prefer.
That said, I do find it interesting to note under which lines of reasoning people will choose something labelled ‘maximum disutility’. I had thought it to be a more obvious problem than that.
Wire-heading, drug addiction, lobotomy, black box: all seem morally similar to me. Heck, my own personal black box would need nothing more than to have me believe that the universe is just a little more absurd than I already believe, that the laws of physics and the progress of humanity are a fever dream, a hallucination. From there I would lower my resistance to wire-heading and drug addiction. Even if I still craved the "truth" (my utility function being largely unchanged), these new facts would lead me to believe there was less possibility of utility in pursuing it, and so the rather obvious utility of drug- or electronically-induced pleasure would win my not-quite-factual day.
The white box, and a Nazi colonel-dentist with his tools laid out, talking to me about what he was going to do to me until I chose the black box, are morally similar. I do not know why the Nazis/Omega want me to choose the black box. I do not know the extent of the disutility the colonel-dentist will actually inflict upon me. I do know my fear is at minimum nearly overwhelming, and may indeed overwhelm me before the day is done.
Being broken, in the sense meant by those who torture you for a result, and choosing the black box are morally equivalent to me. In choosing the black box, I am being asked to abandon a long-term principled commitment to the truth in favor of the short-term but very high utility of giving up, of totally abandoning myself into the control of an evil god to avoid his torture.
It's ALWAYS at least a little scary to choose reality over self-deception, over the euphoria of drugs and painkillers. The utility one derives from making this choice is much colder than the utility one derives from succumbing: it comes more, it seems, from the neocortex and less from the limbic system or the lizard brain of fast fear responses.
My utility AFTER I choose the white box may well be less than if I chose the black box. The scary thing in the white box might be that bad. But my life up to now has rewarded me vastly for resisting drug addiction, for resisting gorping my own brain in the pursuit of non-reality-based pleasure. Indeed, it has rewarded me for resisting fear.
So before I have made my choice, I do not want to choose the lie in order to get the dopamine, or the epinephrine or whatever it is that the wire gives me. That is LOW utility to me before I make the choice. Resisting choosing that out of fear has high utility to me.
Will I regret my choice afterwards? Maybe, since I might be a broken, destroyed shell of a human, subject to brain patterns for which I had no evolutionary preparation.
Would I admire someone who chose the black box? No. Would I admire someone who had chosen the white box? Yes. Doing things that I would admire in others is a strong source of utility in me (and in many others of course).
Do you think your Omega problem contains elements that go beyond this question: would you abandon your principled commitment to truth, and choose believing a lie and wire-heading, under threat of an unknown future torture inflicted upon you by a powerful entity you cannot and do not understand?