Your prediction of what Omega does is just as recursive as Omega’s prediction. But if you actually make a decision at some point, that means your decision algorithm has an escape clause (ow! my brain hurts!), which means that Omega can predict what you’re going to do (by modelling all the recursions you did).
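A minimal sketch of that point, using a toy agent and a toy Omega (the function names, the depth limit, and the "default to one-boxing" fallback are illustrative assumptions, not part of the problem statement). Because the escape clause makes the agent's procedure terminate, Omega can predict the agent simply by running the same procedure:

```python
# Toy illustration: an agent whose decision procedure tries to model Omega
# but bails out at a fixed depth -- the "escape clause". Because that clause
# makes the procedure terminate, Omega can predict the agent by running it.

MAX_DEPTH = 3  # illustrative escape clause: give up and pick a default

def agent_decision(depth=0):
    """Return 'one-box' or 'two-box'."""
    if depth >= MAX_DEPTH:
        return "one-box"  # default choice once the recursion gives up
    prediction = omega_prediction(depth + 1)
    # Naive "outsmart Omega" logic: two-box whenever Omega expects one-boxing.
    return "two-box" if prediction == "one-box" else "one-box"

def omega_prediction(depth=0):
    """Omega predicts by simulating the agent's own decision procedure."""
    return agent_decision(depth)

if __name__ == "__main__":
    print("Agent chooses: ", agent_decision())
    print("Omega predicts:", omega_prediction())  # same computation, same answer
```

Both lines print "two-box": the attempt to out-think Omega is itself just part of the algorithm Omega gets to simulate.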
…but the problem as normally posed implies that there’s an algorithm for producing the optimal result and that I am capable of running such an algorithm.
It doesn’t, actually. The optimal result is two-boxing when Omega thinks you are going to one-box. But since Omega is a god-like supercomputer and you aren’t, that isn’t going to happen. If you happen to have more information about Omega than it has about you, and the hardware to run a simulation of Omega, then you can win like this. But that isn’t the thought experiment.
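To make the payoffs concrete, here is a toy payoff function using the usual dollar amounts from the standard statement of the problem (the function name and table layout are just for illustration):

```python
# Toy payoff table for Newcomb's Problem, using the usual dollar amounts,
# to make "the optimal result is two-boxing when Omega predicted one-boxing"
# concrete.

def payoff(choice, prediction):
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    transparent_box = 1_000
    return opaque_box + (transparent_box if choice == "two-box" else 0)

for prediction in ("one-box", "two-box"):
    for choice in ("one-box", "two-box"):
        print(f"Omega predicts {prediction:7}, you {choice:7}: "
              f"${payoff(choice, prediction):>9,}")
```

The $1,001,000 row is the "optimal result" referred to above, and it only shows up when Omega's prediction and your choice come apart, which is exactly what the god-like-predictor premise rules out.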
My point (or the second part of it) is that simply by asking “what should you do to achieve an optimal result”, the question assumes that your reasoning capacity is good enough to compute the optimal result. If computing the optimal result requires being able to simulate Omega, then the original question implicitly assumes that you are able to simulate Omega.
Where does the question assume that you can compute the optimal result? Newcomb’s Problem simply poses a hypothetical and asks ‘What would you do?’. Some people think they’ve gotten the right answer; others are less confident. But no answer should need to presuppose at the outset that we can arrive at the very best answer no matter what; if it did, that would show that getting the right answer is impossible, not that the ‘I can optimally answer this question’ postulate is trustworthy.
I once had a man walk up to me and ask me if I had the correct time. I looked at my watch and told him the time. But it seemed a little odd that he asked for the correct time. Did he think that if he didn’t specify the qualifier “correct”, I might be uncertain whether I should give him the correct or incorrect time?
I think that asking what you would do, in the context of a reasoning problem, carries the implication “figure out the correct choice” even if you are not explicitly asked what is correct. Besides, the problem is seldom worded exactly the same way, and some formulations of it do ask for the correct answer.
For the record, I would one-box, but I don’t actually think that finding the correct answer requires simulating Omega. I can, however, think of variations of the problem where finding the correct answer does require being able to simulate Omega (or, worse yet, produces a self-reference paradox without anyone having to simulate Omega).
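Here is a deliberately broken toy sketch of that last kind of variation, assuming both sides insist on fully simulating the other and neither has an escape clause (the function names and the recursion-limit handling are illustrative only):

```python
# Deliberately broken toy: agent and Omega each try to fully simulate the
# other, and neither has an escape clause, so the mutual simulation never
# bottoms out.
import sys

def agent():
    # Two-box exactly when Omega is predicted to expect one-boxing.
    return "two-box" if omega() == "one-box" else "one-box"

def omega():
    # Omega predicts by running the agent -- which in turn runs Omega...
    return agent()

if __name__ == "__main__":
    sys.setrecursionlimit(100)  # keep the inevitable failure small
    try:
        print("Agent chooses:", agent())
    except RecursionError:
        print("Mutual simulation never terminates; no well-defined answer.")
```

With no escape clause the mutual simulation never terminates, so there is no fact of the matter about what either side would do, which is the self-reference problem in computational form.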
Right, I just don’t agree that the question assumes that.