I agree that “rationality” should be the thing that makes you win, but Newcomb’s paradox seems kind of contrived.
If a more powerful entity throws good utilities at normally dumb decisions and bad utilities at normally good decisions, then you can make any dumb thing look genius, because you are playing under different rules than the world we live in at present.
I would ask Alpha for help and do what it tells me to do. Alpha is an AI that, like Omega, is never wrong when predicting the future. Alpha would examine Omega and me and work out what Omega predicted. If there is a million in box B, I pick both boxes; otherwise, just box B.
It looks like Omega will be wrong either way. Or will I be wrong? Or will the universe crash?
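A minimal sketch of why Omega ends up wrong either way, assuming we model the strategy above as a simple function of whether box B contains the million (the function names and the two-option encoding are my own hypothetical choices, not part of the original problem statement):

```python
def my_strategy(box_b_has_million: bool) -> str:
    """Follow Alpha's advice: two-box if B holds the million, else one-box."""
    return "both boxes" if box_b_has_million else "box B only"

def omega_fills_box_b(predicted_choice: str) -> bool:
    """Omega puts $1M in box B only if it predicts one-boxing."""
    return predicted_choice == "box B only"

# Check both of Omega's possible predictions: in each case the strategy
# above does the opposite of what was predicted, so Omega cannot be right.
for predicted in ("box B only", "both boxes"):
    box_b_has_million = omega_fills_box_b(predicted)
    actual = my_strategy(box_b_has_million)
    print(f"Omega predicts {predicted!r}: B has $1M = {box_b_has_million}, "
          f"actual choice = {actual!r}, Omega correct = {actual == predicted}")
```

Both branches print `Omega correct = False`, which is just the loop in the comment made explicit: a strategy that conditions on Omega’s prediction (via Alpha) and then does the opposite leaves no consistent prediction for Omega to make.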