This case looks most like the ‘transparent boxes’ version of the problem, which I haven’t read much about.
In Newcomb’s problem, Omega offers a larger amount of utility if you will predictably do something that intuitively would give a smaller amount of utility.
In this situation, refusing to give in to blackmail probably gives you less disutility in the long run (fewer people will bother trying to blackmail you) than acceding would, even though acceding intuitively looks like the lower-disutility option in any single instance.
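Here's a toy sketch of that trade-off with entirely made-up numbers, not anything rigorous, just to make the "fewer instances" point concrete:

```python
# Toy expected-disutility comparison between two blackmail policies.
# All numbers are invented purely for illustration.

def expected_disutility(attempts_per_decade: float, cost_per_attempt: float) -> float:
    """Long-run disutility = how often people try it * what each attempt costs you."""
    return attempts_per_decade * cost_per_attempt

# Policy A: always accede. Paying is cheap per incident, but a reputation for
# paying invites many more attempts.
accede = expected_disutility(attempts_per_decade=20, cost_per_attempt=5)

# Policy B: always refuse. Each carried-out threat hurts more, but a credible
# reputation for refusing means almost nobody bothers to try.
refuse = expected_disutility(attempts_per_decade=1, cost_per_attempt=30)

print(f"accede: {accede}, refuse: {refuse}")  # accede: 100.0, refuse: 30.0
```

The per-instance comparison favours acceding (5 < 30), but once the predictor-like effect on how often you get targeted is factored in, the refusing policy comes out ahead, which is the structural parallel to one-boxing.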
The other interesting part of this particular scenario is how to define ‘blackmail’ and differentiate it from, say, someone accidentally doing something that’s harmful to you and asking you to help fix it. We’ve approached that issue, too, but I’m not sure if it’s been given a thorough treatment yet.