It certainly seems like a simple resolution exists...
As a rationalist, there should only ever be one choice you make: the ideal choice. If you are a perfectly rational person, you will only ever make the ideal choice. You are, at the very least, deterministic. If you can work out the ideal choice, so can someone else. That means that if someone knows your exact situation (trivial in Newcomb's paradox, since the superintelligent agent is causing your situation), they can predict exactly what you will do, even without being perfectly rational themselves. If you know they are predicting you and will act on that prediction, the rational solution is simply to follow through on whichever prediction is most profitable, as if they really could see the future. Since you're deterministic, the fact that you will do this is itself predictable, and thus the prediction is self-fulfilling.
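To make that concrete, here is a minimal Python sketch (the function names, the two-option setup, and the payoff values are my illustrative assumptions, not anything stipulated in the thread): because the agent's choice is a deterministic function of its situation, Omega can "predict" simply by running that same function, and the prediction is guaranteed to match the later choice.

```python
# Sketch: a deterministic agent, and a predictor that predicts by
# running the agent's own decision procedure. No foresight needed.

def agent_choice(situation: str) -> str:
    """A deterministic agent: same situation in, same choice out."""
    if situation == "newcomb":
        # The agent reasons: my choice is predictable, so I should act
        # as if the prediction will match it; one-boxing then pays more.
        return "one-box"
    return "default"

def omega_predict(situation: str) -> str:
    """Omega predicts by running the agent's deterministic decision
    procedure on the very situation Omega itself set up."""
    return agent_choice(situation)

def play_newcomb() -> int:
    situation = "newcomb"
    prediction = omega_predict(situation)  # Omega fills the boxes first
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    choice = agent_choice(situation)       # the agent chooses afterwards
    assert choice == prediction            # self-fulfilling, by determinism
    return opaque_box if choice == "one-box" else opaque_box + 1_000

print(play_newcomb())  # 1000000
```

The assert is the whole point: the prediction matches the choice not because Omega saw the future, but because both are outputs of the same deterministic function.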
Welcome to Less Wrong!
Why do you think so?
I think so too.
Perhaps we've each heard a slightly different wording of the paradox (or a more than slightly different one), but I don't see what causation has to do with it.
He knows what your environmental circumstances are because he put you in them. That is, he obviously knows that you are going to encounter a Newcomb-like problem because he just gave it to you. (I.e., no deep technical meaning, just the obvious.)
Maybe I'm being dense. Omega needs to know more than just that you are going to encounter the problem; even Omega's scheduler and publicist know that!
Omega knows the exact situation, including how an identical model of you would act or has acted, because that is stipulated; it does not follow trivially from the fact that Omega caused your situation.