I find it doubtful that my utility, as evaluated by my current utility function, could be improved by being made to accept a false belief that I would normally reject outright.
Vaguely realistic example: You believe that the lottery is a good bet, and as a result win the lottery.
Hollywood example: You believe that the train will leave at 11:10 instead of 10:50, and so miss the train, setting off an improbable-seeming sequence of life-changing events such as meeting your soulmate, getting the job of your dreams, and finding a cure for aging.
Omega example: You believe that “hepaticocholangiocholecystenterostomies” refers to surgeries linking the gall bladder to the kidney. This subtly changes the connections in your brain such that over time you experience a great deal more joy in life, as well as eliminating your potential for Alzheimer’s.
The first example sounds like something that Omega might actually be able to forecast, so I may have to revise my position on those grounds. On the other hand, that specific example would pretty much have to alter my entire epistemic landscape, which makes it hard to measure the utility difference between the me who believes the lottery is a bad bet and the altered person who wins it. The second falls into the category I mentioned previously: things that increase my utility only in the course of my finding out they’re wrong. When I arrive at the station, I will find out that the train has already left.
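To make concrete why the belief in the first example is one I would normally reject outright, here is a toy expected-value calculation in Python. The ticket price, jackpot, and odds are made-up stand-ins rather than any real lottery’s numbers, but realistic figures give the same sign.

```python
# Toy expected-value check for a lottery ticket. The price, jackpot, and odds
# below are invented for illustration; real lotteries differ in detail but not
# in the sign of the result.
ticket_price = 2.00          # dollars
jackpot = 100_000_000.00     # dollars
p_jackpot = 1 / 300_000_000  # chance of winning the jackpot

expected_value = p_jackpot * jackpot - ticket_price
print(f"Expected value of one ticket: {expected_value:.2f} dollars")
# About -1.67 dollars per ticket: bad in expectation under my current utility
# function, which is exactly why "the lottery is a good bet" is the kind of
# belief I would normally reject outright.
```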
As for the third, I suspect that there isn’t a neurological basis for such a thing to happen. If I believed differently, I would have a different position on the dilemma in the first place.
Regardless of whether the third one is plausible, I suspect Omega would know of some hack that is equally weird and equally impossible to anticipate.
A sensible thing to consider. You are effectively dealing with an outcome pump, after all; the problem leaves plenty of solution space available, and outcome pumps usually don’t produce an answer you’d expect. They instead produce something that matches the criteria even better than anything you were aware of.
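A toy sketch of that point, with invented numbers: if you model the solution space as a large set of scored candidates and the options you were aware of as a small sample of it, an optimizer searching the whole space almost always returns something that beats your entire shortlist. The distribution, the sizes, and the names below are all made up for illustration.

```python
import random

random.seed(0)

# Toy model: each candidate outcome is just a score for how well it matches
# the stated criteria. The numbers are invented purely for illustration.
full_solution_space = [random.gauss(0, 1) for _ in range(1_000_000)]

# The handful of options the asker was actually aware of: a tiny sample.
options_you_had_in_mind = random.sample(full_solution_space, 5)

# An outcome pump doesn't pick from your shortlist; it optimizes over everything.
pump_output = max(full_solution_space)

print(f"best option you were aware of: {max(options_you_had_in_mind):.2f}")
print(f"outcome pump's pick:           {pump_output:.2f}")
# With a million candidates against five, the pump's pick is essentially always
# far better than anything on the shortlist, and there is no reason to expect
# it to resemble anything on the shortlist either.
```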
You can subtly change the train example to eliminate the problem of finding out the belief was false: instead of actually missing the train, you just leave later and so run into someone who gives you a ride, and then you never go back and check when the train was actually scheduled to leave.
That modified example fails the “that you would normally reject outright” criterion, though, unless I already have well-established knowledge of the actual train schedule.