I think of Omega as a simplified stand-in for other people.
The part about Omega being omniscient and knowably trustworthy isn’t solved. But I think the problem of Omega rewarding bizarre, irrational behaviour on your part mostly goes away if you assume it’s fairly human-like, perhaps following UDT or some other decision theory itself. A human-like motivation for posing Newcomb’s problem could be that Omega wants one of the boxes kept closed for some reason, and will reward you for keeping it closed. To make the scenario fit this explanation, Omega should say it doesn’t want you to open the box, and preferably give a reason.
Kinds of things the human-like Omega might do:
trust you or not based on its prediction of your behaviour.
prefer you to be rewarded if you act how it wants.
prefer you be punished if you harm it.
tell you what it wants of you.
But it should be less likely to reward you for acting irrationally for no reason, or for doing what it wants you not to do.
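To make the payoff structure concrete, here is a minimal sketch in Python, using the standard $1,000 / $1,000,000 Newcomb payoffs and a hypothetical `accuracy` parameter standing in for how well a merely human-like Omega can predict you (both are illustrative assumptions, not anything specified above):

```python
# Minimal sketch of Newcomb's problem with an imperfect, human-like predictor.
# The payoff amounts and the `accuracy` parameter are illustrative assumptions.

SMALL = 1_000        # transparent box: always contains $1,000
BIG = 1_000_000      # opaque box: filled only if Omega predicts one-boxing

def expected_value(one_box: bool, accuracy: float) -> float:
    """Expected payoff given your choice and Omega's predictive accuracy.

    `accuracy` is the probability that Omega correctly predicts your choice.
    """
    if one_box:
        # The opaque box is full iff Omega predicted one-boxing (prob = accuracy).
        return accuracy * BIG
    else:
        # You take both boxes; the opaque box is full only if Omega
        # mispredicted you as a one-boxer (prob = 1 - accuracy).
        return SMALL + (1 - accuracy) * BIG

if __name__ == "__main__":
    for acc in (0.55, 0.9, 0.99):
        print(f"accuracy={acc}: one-box EV = {expected_value(True, acc):,.0f}, "
              f"two-box EV = {expected_value(False, acc):,.0f}")
```

With these assumed payoffs, one-boxing has the higher expected value whenever Omega's accuracy exceeds roughly 50.05%, so the reward structure only needs a decent human-like predictor who does what it says, not an omniscient one (though, as noted above, that doesn't resolve the trustworthiness question).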