Does the utility calculation for the false belief include the disutility of the other beliefs I will have to overwrite? For example, suppose the false belief is “I can fly”. At some point, clearly, I will have to rationalise away the pain of my broken legs from jumping off a cliff. Short of reprogramming my mind to really not feel the pain anymore (and then we’re basically talking about wireheading), it seems hard to come up with any fact, true or false, that provides enough utility to overcome that sort of thing.
I additionally note that, in the problem as given, the maximum disutility has no lower bound; for all I know it’s the equivalent of three cents. Likewise the maximum utility has no lower bound. Perhaps Omega ought to provide some calibration; for example, he might say that the disutility of knowing the true fact is at least equal to that of breaking an arm, or some such comparison.
As the problem is written, I’d take the true fact.
“I can fly” doesn’t sound like a particularly high-utility false belief; it sounds like you are attacking a straw man. I’d assume that if the false information is a package of several pieces of false information, then the entire package is optimized for high utility.
“I can fly” doesn’t sound like a particularly high-utility false belief.
True, but that’s part of my point: The problem does not specify that the false belief has high utility, only that it has the highest possible utility. No lower bound.
Additionally, any false belief will bring you into conflict with reality eventually. “I can fly” just illustrates this dramatically.
Of course most false beliefs will have some negative-utility results. That does not prove that all false beliefs will be net negative. The vastness of the space of possible beliefs suggests there are likely to be many approximately harmless false ones, and some very beneficial ones, despite the general tendency of false beliefs to carry negative utility.
In fact, Kindly gives an example of each here.
In the example of believing some sufficiently hard-to-factor composite to be prime, you would not naturally be able to cause a conflict anyway, since it is too hard to show that the number is not prime. In the FAI example, it might have to keep you in the dark for a while and then fool you into thinking that someone else had created an FAI independently, so you would never have to learn that your game was actually an FAI. The negative utility from this conflict resolution would be negligible compared to the benefits. The negative utility arising from belief-conflict resolution in your example of “I can fly” does not even come close to generalizing to all possible false beliefs.
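A minimal sketch of that first point, assuming (my assumption, not anything stated in the thread) that the believer could only collide with the falsehood if someone exhibited an actual factor: for a composite built from two primes of a few hundred digits each, naive factoring simply never finishes, so the conflict never arises.

```python
# A minimal sketch, under the assumption above: trial division needs on the
# order of sqrt(n) steps in the worst case, which is hopeless for a semiprime
# built from two primes of a few hundred digits each.

def smallest_factor(n: int):
    """Return the smallest nontrivial factor of n, or None if n is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None  # never reached in practice for a large hard-to-factor composite
```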
As written, the utility calculation explicitly specifies ‘long-term’ utility; it is not a narrow calculation. This is Omega we’re dealing with: it’s entirely possible that it mapped your utility function by scanning your brain, checked all possible universes forward in time from the addition of each possible fact to your mind, and took the worst and best true/false combination.
Accordingly, a false belief that will lead to your death or maiming is almost certainly non-optimal. No: this is the one false thing with the best long-term consequences for you, as you value such things, out of all the false things you could possibly believe.
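To make the selection concrete, here is a toy sketch, with `statements`, `is_true`, and `long_term_utility` as hypothetical stand-ins for things only Omega could actually evaluate: the two offers are just an argmin over the truths and an argmax over the falsehoods.

```python
def omegas_offers(statements, is_true, long_term_utility):
    """Toy model of Omega's two offers for a given agent's values.

    `statements` enumerates candidate beliefs, `is_true` is a truth
    predicate, and `long_term_utility(s)` scores the long-term
    consequences of coming to believe s (all hypothetical stand-ins).
    """
    truths = [s for s in statements if is_true(s)]
    falsehoods = [s for s in statements if not is_true(s)]

    # The truth with the worst long-term consequences...
    worst_truth = min(truths, key=long_term_utility)
    # ...versus the falsehood with the best long-term consequences.
    best_falsehood = max(falsehoods, key=long_term_utility)
    return worst_truth, best_falsehood
```

The point is that `best_falsehood` is chosen against the entire space of falsehoods, so anything as self-destructive as “I can fly” is selected out almost by construction.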
True, the maximum utility/disutility has no lower bound. This is intentional. If you really believe that your position is such that no true information can hurt you, and/or no false information can benefit you, then you could take the truth. This is explicitly the truth with the worst possible long-term consequences for whatever it is you value.
Yes, it’s pretty much defined as a sucker bet, implying that Omega is attempting to punish people for believing that there is no harmful true information and no advantageous false information. If you did, in fact, believe that you couldn’t possibly gain by believing a falsehood, or suffer from learning a truth, this is the least convenient possible world.