“What the Omega does to predict your decision doesn’t affect you, shouldn’t concern you, it looks like only that it’s usually right is relevant.”
Is this the least convenient world? What Omega does to predict my decision does concern me, because it determines whether I should one-box or two-box. However, I’m willing to allow that in an LCW, I’m not given enough information. Is this the Newcomb “problem”, then: how to make a rational decision when you’re not given enough information?
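For concreteness, here is a minimal sketch of the calculation the quoted view seems to lean on: an evidential expected-value comparison that uses nothing about Omega except its accuracy. The payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) are the usual textbook ones, assumed here rather than taken from the thread.

```python
# Evidential expected values for Newcomb's problem, using only Omega's
# accuracy p and the standard payoffs. This illustrates the quoted view
# that "only that it's usually right is relevant"; it says nothing about
# whether conditioning on my own choice like this is legitimate.

def expected_values(p: float, big: int = 1_000_000, small: int = 1_000):
    """Expected payoffs when Omega predicts correctly with probability p."""
    one_box = p * big                # opaque box is full iff Omega foresaw one-boxing
    two_box = (1 - p) * big + small  # opaque box is full only if Omega was wrong
    return one_box, two_box

for p in (0.5, 0.6, 0.9, 0.99):
    one, two = expected_values(p)
    better = "one-box" if one > two else "two-box"
    print(f"accuracy {p:.2f}: one-box EV = {one:>9,.0f}, "
          f"two-box EV = {two:>9,.0f} -> {better}")
```

On this reading the decision flips to one-boxing at a fairly modest accuracy, but the calculation only goes through if conditioning on my own choice is legitimate, and whether it is depends on how Omega actually makes its prediction. That is exactly the information the LCW withholds.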