That just means the AI cares about a particular class of decision theories rather than a specific one, as Dick Kick’em does. I could re-run the same thought experiment, but this time Dick Kick’em says:
“I am going to read your mind, and if you believe in a decision theory that one-boxes in Newcomb’s Paradox I will leave you alone, but if you believe in any other decision theory I will kick you in the dick.”
In this variation, Dick Kick’em would be judging the agent on exactly the same criteria as the AI in Newcomb’s problem. All I have done is remove the game afterwards, but that is somewhat irrelevant, since the AI doesn’t judge you on your actions, only on what you would do if you were in a Newcomb-type scenario.
“I am going to read your mind, and if you believe in a decision theory that one-boxes in Newcomb’s Paradox I will leave you alone, but if you believe in any other decision theory I will kick you in the dick.”
Sure, that’s possible. Assuming there are no Newcomb predictors in that universe, only DK, rational agents believe in two-boxing. I am lost as to how this relates to your original point.
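To make the payoff structures in this exchange concrete, here is a minimal sketch. The numeric payoffs, and the simplifying assumption of a perfectly accurate predictor facing an agent who acts on the decision theory it believes in, are my own illustrative choices, not part of the exchange.

```python
# Illustrative sketch only: the payoff numbers and the modelling choice that
# the agent acts on the decision theory it believes in are assumptions for
# this example, not claims made in the exchange above.

def newcomb_payoff(one_boxing_theory: bool) -> int:
    """Newcomb's predictor: fills the opaque box ($1,000,000) iff it predicts,
    from the agent's decision theory, that the agent will one-box. We assume a
    perfectly accurate predictor and an agent who acts on its own theory."""
    if one_boxing_theory:
        return 1_000_000   # takes only the opaque box, which was filled
    return 1_000           # takes both boxes, so the opaque box is empty

def dick_kickem_payoff(one_boxing_theory: bool) -> int:
    """The modified Dick Kick'em: reads the agent's mind and judges the belief
    alone; no game is played afterwards."""
    if one_boxing_theory:
        return 0           # left alone
    return -1              # kicked in the dick (a stand-in disutility)

for one_boxing_theory in (True, False):
    label = "one-boxing theory" if one_boxing_theory else "two-boxing theory"
    print(f"{label:17s}  Newcomb: {newcomb_payoff(one_boxing_theory):>9,}"
          f"  DK only: {dick_kickem_payoff(one_boxing_theory):>2}")
```

Both functions take only the agent’s decision theory as input, which is the sense in which the modified Dick Kick’em and the Newcomb AI judge on the same criterion; the reply grants that such a predictor is possible while maintaining that rational agents in a DK-only universe still believe in two-boxing.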