the agent does not regard the prediction outcome as contingent on the agent’s computation.
It seems to me that this problem assumes that the predictor both does and does not predict correctly.
When determining the predictor’s actions, we assume that it foresees the agent’s two-boxing.
When determining the agent’s actions, we assume that the simulated predictor behaves the same regardless of the agent’s decision.
The question thus seems to contradict itself.
In this problem the predictor predicts correctly. Can you explain why you think it predicts incorrectly?
In trying to explain why the simulation ought to show that the prediction outcome is in fact contingent, I realized that I was confused, so I’m going to disregard what I previously tried to think, and start over.
The results are messy and full of wrong ideas; I suggest skimming to get the gist. That is: the following text is a noisy signal of what I’m trying to think, so don’t read the details too closely.
--
I may have to reconsider whether I properly grokked the sequence on free will and determinism.
To the extent that the agent has not yet made up its mind as to whether it will one-box or two-box, the simulation ought to reflect that.
The effect of possessing the simulation should be that the agent has (1) unusually high confidence in the predictor’s accuracy, and (2) exceptional luminosity, which ought to be assumed of decision-theoretic agents anyway.
Being luminous about my decision-making process does not mean that I have made up my mind. Similarly, being able to run a program that contains a highly-accurate high-level description of myself does not mean that I already know what decision I’m going to eventually make.
All this adds up to circumstances that, if anything, more strongly support one-boxing than vanilla Newcomb does.
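Here is a toy sketch of the picture I have in mind (my own illustration, under the simplifying assumption that the predictor works by running the agent’s decision procedure and predicting whatever it outputs). Because the simulated prediction is a function of whichever policy I hand it, probing the simulation with each candidate policy shows the prediction outcome co-varying with my choice rather than being fixed in advance:

```python
# Toy sketch, not the original problem statement: assume the predictor
# simulates the agent's decision procedure and predicts its output.

def predictor(agent_policy):
    """Predict by running the agent's own decision procedure."""
    return agent_policy()  # returns "one-box" or "two-box"

def payoff(agent_policy):
    """Standard Newcomb payoffs: the opaque box is filled iff one-boxing is predicted."""
    prediction = predictor(agent_policy)
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    choice = agent_policy()
    return opaque_box if choice == "one-box" else opaque_box + 1_000

# An agent that has not yet made up its mind can probe the simulation with
# each candidate policy and see the prediction track its choice:
for policy in (lambda: "one-box", lambda: "two-box"):
    print(policy(), "->", payoff(policy))
# one-box -> 1000000
# two-box -> 1000
```

Nothing in this sketch requires my mind to already be made up; it only requires the luminosity to hand my own decision procedure to the simulation.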
--
Alternatively, if we accept as given that the simulated model of the agent has already made up its mind when the agent runs the simulation, we must equally infer that the agent’s own decision is likewise determined. From the agent’s point of view, this means that its key choice occurred earlier, in which case we can just ask about its decision back then instead of now, and we again have a standard Newcomb/Parfit situation.
--
In short, two-boxing requires the agent to be stupid about causality and determinism, so the question is fundamentally nothing new.