The first example sounds like something that Omega might actually be able to forecast, so I may have to revise my position on those grounds, but on the other hand that specific example would pretty much have to alter my entire epistemic landscape, so it’s hard to measure the utility difference between the me who believes the lottery is a bad deal and the altered person who wins it. The second falls into the category I mentioned previously of things that increase my utility only as I find out they’re wrong; when I arrive, I will find out that the train has already left.
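As an aside, the sense in which the lottery is "normally a bad deal" is just negative expected value. A minimal worked sketch, with purely hypothetical numbers:

```python
# Hypothetical lottery numbers, purely illustrative.
ticket_price = 2.00          # cost of one ticket
jackpot = 100_000_000.00     # prize if you win
p_win = 1 / 300_000_000      # odds of winning

# Expected value of buying a ticket: p * prize - cost.
expected_value = p_win * jackpot - ticket_price
print(f"Expected value per ticket: ${expected_value:.2f}")  # about -$1.67
```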
As for the third, I suspect that there isn’t a neurological basis for such a thing to happen. If I believed differently, I would have a different position on the dilemma in the first place.
A sensible thing to consider. You are effectively dealing with an outcome pump, after all; the problem leaves plenty of solution space available, and outcome pumps usually don't produce an answer you'd expect. Instead they produce something that matches the criteria even better than anything you were aware of.
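To make that point concrete, an outcome pump behaves like a brute-force optimizer over a far larger candidate space than the shortlist you actually considered. A minimal sketch, where the scoring function and candidate sets are hypothetical stand-ins for "matches the criteria":

```python
def criteria_score(candidate: int) -> float:
    # Hypothetical scoring function standing in for "matches the criteria".
    return -abs(candidate - 1_000_003)

# The answers you were aware of: a small hand-picked set.
known_candidates = [10, 500, 999_999]

# The outcome pump searches the full solution space, not just your shortlist.
full_space = range(2_000_000)

best_known = max(known_candidates, key=criteria_score)
pump_result = max(full_space, key=criteria_score)

print(criteria_score(best_known))   # -4: your best guess misses by 4
print(criteria_score(pump_result))  #  0: an exact match you never considered
```

The only point the sketch illustrates is that an exhaustive optimizer's best answer routinely falls outside the small set of solutions a human enumerated in advance.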
You can subtly change the train example to eliminate that problem. Instead of actually missing the train, you just leave later and so run into someone who gives you a ride, and you never go back to check when the train actually left.
The example fails the "that you would normally reject outright" criterion, though, unless I already have well-established knowledge of the actual train schedule.
Regardless of whether the third one is plausible, I suspect Omega would know of some hack that is equally weird and equally impossible to anticipate.