There may also be some people who have doubts about whether a perfect predictor is possible even in theory.
While perfect predictors are possible, perfect predictors who give you some information about their prediction are often impossible: since you learn of their prediction, you can just do the opposite. This isn't a problem here, because Omega doesn't care if he leaves the box empty and you one-box anyway, but it's not something to forget about in general.
The trick in open-box Newcomb's is that Omega predicts whether you will one-box if you see a full box. If you are the kind of agent who always does "the opposite", you'll see an empty box and one-box. That isn't actually a problem, since Omega only predicted what you'd do if you saw a full box.
That's… exactly what my last sentence meant. Are you repeating it on purpose, or was my explanation that unclear?
Oh sorry, I hadn't fully woken up when I read your comment.