Well, in Newcomb’s problem it’s primarily a question of how good the predictor is, not how close the duplicate is. I think FDT is well-defined in cases with an (approximately) perfect predictor, and also in cases with (very nearly) exact duplicates, but much less so in other cases.
(I think that it also makes sense to talk about FDT in cases where a perfect predictor randomises its answers x% of the time, so you know that there’s a very robust (100 − x/2)% probability it’s correct, since a random answer is still right half the time. But then once we start talking about predictors that are nearer the human level, or evidence that’s more like statistical correlations, it feels like we’re in tricky territory. Probably “non-exact duplicates in a prisoner’s dilemma” is a more central example of the problem I’m talking about; and even then it feels more robust to me than Eliezer’s applications of expected utility theory to predicting big neural networks.)
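The arithmetic behind that probability can be sketched quickly. This is just an illustrative calculation (the function name and the treatment of x as a fraction rather than a percentage are my own choices): a predictor that answers perfectly except for randomising a fraction x of the time is still right half the time when it randomises, so its overall accuracy is 1 − x/2.

```python
def correctness_probability(x: float) -> float:
    """Accuracy of a predictor that is perfect with probability (1 - x)
    and answers a binary question uniformly at random with probability x.
    A random binary answer is still correct half the time."""
    return (1 - x) * 1.0 + x * 0.5  # simplifies to 1 - x/2

print(correctness_probability(0.0))  # never randomises -> 1.0
print(correctness_probability(0.5))  # randomises half the time -> 0.75
print(correctness_probability(1.0))  # always random -> 0.5
```

In percentage terms, randomising x% of the time gives a (100 − x/2)% chance of being correct.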