I’m not sure this scenario even makes sense as a hypothetical. At least for me personally, I find it doubtful that my utility could be improved according to my current function by being made to accept a false belief that I would normally reject outright.
If such a thing is possible, then I’d pick the false belief: utility is necessarily better than disutility, I’m in no position to second-guess Omega’s assurance about which option will bring more, and there’s no meta-utility on the basis of which I could be persuaded to choose against my current utility function. But even granting the existence of Omega as a hypothetical, I’d bet against this scenario being able to happen to me.
Edit: this comment has made me realize that I was working under the implicit assumption that the false belief could not be something that would deliver its utility while being proven wrong. If I include such possibilities, there are definitely many ways that my utility could be improved by being convinced of a falsehood, but I would only be temporarily convinced, whereas I parsed the dilemma as one where my utility is increased as long as I continue to believe the falsehood.
I find it doubtful that my utility could be improved according to my current function by being made to accept a false belief that I would normally reject outright.
Vaguely realistic example: You believe that the lottery is a good bet, and as a result win the lottery.
Hollywood example: You believe that the train will leave at 11:10 instead of 10:50, and so miss the train, setting off an improbable-seeming sequence of life-changing events such as meeting your soulmate, getting the job of your dreams, and finding a cure for aging.
Omega example: You believe that “hepaticocholangiocholecystenterostomies” refers to surgeries linking the gall bladder to the kidney. This subtly changes the connections in your brain such that over time you experience a great deal more joy in life, as well as curing your potential for Alzheimer’s.
The first example sounds like something that Omega might actually be able to forecast, so I may have to revise my position on those grounds. On the other hand, that specific example would pretty much have to alter my entire epistemic landscape, so it’s hard to measure the utility difference between the me who believes the lottery is a bad deal and the altered person who wins it. The second falls into the category I mentioned previously of things that increase my utility only as I find out they’re wrong; when I arrive, I will find out that the train has already left.
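To put rough numbers on the “believes the lottery is a bad deal” part, here is a back-of-the-envelope sketch in Python; the ticket price, odds, and jackpot are all made up, and the only point is the gap between the ex ante expected value and the ex post payout for the improbable winner:

```python
# Back-of-the-envelope lottery arithmetic; every number here is made up for illustration.
ticket_price = 2.0          # hypothetical ticket price, in dollars
jackpot = 100_000_000.0     # hypothetical jackpot, in dollars
p_win = 1e-8                # hypothetical probability of winning

# Ex ante: the hurdle the belief "the lottery is a good bet" has to clear.
expected_value = p_win * jackpot - ticket_price
print(f"expected value per ticket: {expected_value:+.2f}")            # -1.00, a losing bet

# Ex post: the outcome Omega would effectively be forecasting for the altered me.
print(f"net payout for the winner: {jackpot - ticket_price:+,.2f}")   # +99,999,998.00
```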
As for the third, I suspect that there isn’t a neurological basis for such a thing to happen. If I believed differently, I would have a different position on the dilemma in the first place.
Regardless of whether the third one is plausible, I suspect Omega would know of some hack that is equally weird and impossible to anticipate.
A sensible thing to consider. You are effectively dealing with an outcome pump, after all; the problem leaves plenty of solution space available, and outcome pumps usually don’t produce an answer you’d expect; they instead produce something that matches the criteria even better than anything you were aware of.
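As a toy sketch of what I mean by an outcome pump (purely illustrative; the candidates and scores are invented), it is essentially a search over the available solution space for whatever scores best on the stated criterion, with no regard for which answers you happened to anticipate:

```python
# Toy "outcome pump": score every candidate outcome against the stated
# criterion and return the best one.  Candidates and scores are made up.
def outcome_pump(scored_candidates: dict) -> str:
    """Return whichever candidate scores highest on the criterion,
    whether or not anyone anticipated it."""
    return max(scored_candidates, key=scored_candidates.get)

# Outcomes the chooser was aware of...
candidates = {"win the lottery": 5.0, "miss the train, meet soulmate": 7.0}
# ...plus the rest of the solution space, which the pump also searches.
candidates["subtle neurological hack"] = 9.0

print(outcome_pump(candidates))  # -> subtle neurological hack
```

The winning entry tends to come from the part of the space nobody bothered to enumerate beforehand.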
The second falls into the category I mentioned previously of things that increase my utility only as I find out they’re wrong; when I arrive, I will find out that the train has already left.
You can subtly change that example to eliminate that problem. Instead of actually missing the train, you just leave later and so run into someone who gives you a ride, and then you never go back and check when the train actually left.
The example fails the “that you would normally reject outright” criterion, though, unless I already have well-established knowledge of the actual train schedule.