The answer to this problem is only obvious because it’s framed in terms of utility. Utility is, by definition, the thing you want. Strictly speaking, this should include any utility you get from your satisfaction at knowing the truth rather than a lie.
So for someone who valued knowing the truth highly enough, this problem actually should be impossible for Omega to construct.
Okay, so you are a mutant, and you inexplicably value nothing but truth. Fine.
The falsehood can still be a list of true things, tagged with ‘everything on this list is true’, with one inconsequential falsehood mixed in. That package would still have net long-term utility for a truth-desiring utility function, particularly since you will soon be able to identify the falsehood and, with your mutant mind, quickly locate and eliminate the discrepancy.
The truth has been defined as something that cannot lower the accuracy of your beliefs, and your utility function is defined exclusively in terms of the accuracy of your beliefs, yet the truth must still have maximum possible long-term disutility. Fine. Mutant that you are, the truth of maximum disutility is one which leads you directly to a very interesting problem, one that distracts you for an extended period of time but which you will ultimately be unable to solve. This wastes a great deal of your time while leaving you with no greater utility than you had before; the disutility is the opportunity cost of the time you could have spent learning other things. Maximum disutility could mean a problem that occupies you for the rest of your life, stalling your attempts to learn much of anything else.
So for someone who valued knowing the truth highly enough, this problem actually should be impossible for Omega to construct.
Not necessarily: the problem only stipulates that of all truths you are told the worst truth, and of all falsehoods the best falsehood. If all you value is truth and you can’t be hacked, then it’s possible that the worst truth still has positive utility, and the best falsehood still has negative utility.
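To put the same point slightly more formally (the notation here is my own, not part of the original problem): write U(s) for your utility from being told statement s, T for the set of truths Omega could tell you, and F for the set of falsehoods. The stipulation only fixes

\[
  t^* = \operatorname*{arg\,min}_{t \in T} U(t), \qquad
  f^* = \operatorname*{arg\,max}_{f \in F} U(f),
\]

and nothing in the setup rules out \(U(t^*) > 0 > U(f^*)\): for an unhackable agent who values only truth, even the worst truth may still beat the best falsehood.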
Can we solve this problem by slightly modifying the hypothetical to say that Omega is computing your utility function perfectly in every respect except for whatever extent you care about truth for its own sake? Depending on exactly how we define Omega’s capabilities and the concept of utility, there probably is a sense in which the answer really is determined by definition (or in which the example is impossible to construct). But I took the spirit of the question to be “you are effectively guaranteed to get a massively huge dose of utility/disutility in basically every respect, but it’s the product of believing a false/true statement—what say you?”