Omega shows up and presents a Decision Challenge, consisting of some assortment of your favorite decision theory puzzlers. (Newcomb, etc etc etc...)
Unbeknownst to you, however, Omega also has a secret additional test: if the decisions you make are all something OTHER than the normal rational ones, then Omega will pay you some huge superbonus of utilons, vastly dwarfing any cost of losing all of the individual challenges...
However, Omega also models you, and if you would have willingly "failed" HAD YOU KNOWN about the extra test above (but not this extra-extra criterion), then you get no bonus for failing everything.
That is not Omega. Omega as presented for the purpose of Newcomblike problems is known, for the sake of the hypothetical, to be trustworthy. He does not deceive us about our utility payoffs. And yes, "deceive" includes being technically truthful while leaving off a whole utility-payoff category. If this is not clear to the audience from the description given in the problem definition, then the problem definition needs to be more pedantic.
Consider, for example, Vladimir's original definition of counterfactual mugging. He throws in "the Omega is also known to be absolutely honest and trustworthy, no word-twisting, so the facts are really as it says". It should be fairly clear to the reader that "unbeknownst to you" is to be considered out of scope of the exercise.
If you want a demigod who plays games that involve giving us inaccurate knowledge of his arbitrary utility payoffs, then you need to invent a new name for him.
None of Parfit's Hitchhiker, the Prisoner's Dilemma, Newcomb's Problem, or Counterfactual Mugging relies on the kind of 'payoff for being irrational' difficulty you present. They are all instances where a decision algorithm that wins will also win in the regular situations that they caricature.
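To make "a decision algorithm that wins" concrete, here is a minimal sketch (not from the thread) of the expected-payoff comparison in the standard Newcomb setup, assuming an illustrative predictor accuracy of 99% and the usual $1,000,000 / $1,000 boxes. The point is that one-boxing wins on the problem's stated payoffs alone, with no hidden bonus category needed.

```python
# Illustrative sketch: expected payoffs in Newcomb's problem.
# The predictor accuracy and dollar amounts below are assumptions
# chosen for illustration, not part of the original discussion.

PREDICTOR_ACCURACY = 0.99   # probability Omega predicts your choice correctly
BIG_BOX = 1_000_000         # opaque box, filled iff Omega predicted one-boxing
SMALL_BOX = 1_000           # transparent box, always contains $1,000

def expected_one_box(accuracy: float) -> float:
    # One-boxer takes only the opaque box; it is full with probability = accuracy.
    return accuracy * BIG_BOX

def expected_two_box(accuracy: float) -> float:
    # Two-boxer always gets the small box, and gets the big box
    # only if Omega mispredicted (probability = 1 - accuracy).
    return SMALL_BOX + (1 - accuracy) * BIG_BOX

if __name__ == "__main__":
    print("one-box :", expected_one_box(PREDICTOR_ACCURACY))   # 990000.0
    print("two-box :", expected_two_box(PREDICTOR_ACCURACY))   # 11000.0
```

Under any accuracy high enough to make the hypothetical interesting, the one-boxing algorithm simply collects more of the payoffs Omega actually announced; no 'payoff for being irrational' is involved.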