This sort of thing is really the motivating example behind Newcomb’s problem.
I’m not seeing the analogy. Can you explain?
The extortion attempt cost the aliens almost nothing, and would have given them a vacant solar system to move into if someone like Fred had been in power, so it’s rational for them to make the attempt almost regardless of the odds of succeeding. Nobody is reading anybody else’s mind here, except the idiots who read their own minds and uploaded them to the Internet, and they don’t seem to be making any of the choices.
This case looks most like the ‘transparent boxes’ version of the problem, which I haven’t read much about.
In Newcomb’s problem, Omega offers you the larger payoff if you will predictably make the choice (taking only the one box) that intuitively gives you the smaller payoff.
In this situation, being predictably resistant to blackmail probably gives you less disutility in the long run (fewer people bother trying to blackmail you) than acceding does, even though in any single instance acceding intuitively looks like the lower-disutility option.
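To put rough numbers on that (purely illustrative, not from the scenario): suppose giving in to a demand costs you 10 units of disutility and refusing costs 15, but a reputation for paying invites, say, five more attempts while a reputation for refusing deters them.

Accede: 10 now + 5 × 10 from future attempts = 60 expected disutility
Refuse: 15 now + 0 from future attempts = 15 expected disutility

So the policy that looks worse in the single case comes out ahead once you account for how predictably you’ll be targeted.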
The other interesting part of this particular scenario is how to define ‘blackmail’ and differentiate it from, say, someone accidentally doing something that’s harmful to you and asking you to help fix it. We’ve approached that issue, too, but I’m not sure if it’s been given a thorough treatment yet.
They had other choices, though. It would have been similarly inexpensive to offer to simulate happy people.
Even limiting the spheres to a single proof-of-concept would have been a start.