Well, in that case Omega’s prediction and your decision (one-boxing or two-boxing) aren’t subjunctively dependent on the same function. That dependence is precisely what drives FDT’s decision to one-box: without it, FDT recommends two-boxing, just as CDT does.
In Newcomb’s problem, Omega is a perfect predictor, not just a very good one. Subjunctive dependence is necessarily also perfect in that case.
If Omega is an imperfect predictor, their predictions might be only partially subjunctively dependent on your decision procedure, or not dependent at all. Below some point on that scale, FDT starts recommending two-boxing, as it should.
Omega can be a nearly perfect predictor by some measures while having zero subjunctive dependence. Conversely, even a comparatively poor predictor can have enough subjunctive dependence that you should one-box (with the standard $1,000/$1,000,000 payoffs, an edge of only 0.1% suffices).
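The 0.1% figure falls out of simple expected-value arithmetic, assuming the standard Newcomb payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one. A rough sketch:

```python
# Break-even arithmetic for one-boxing, assuming the standard payoffs:
# the opaque box holds $1,000,000 iff Omega predicted one-boxing, and
# the transparent box always holds $1,000.
BIG = 1_000_000   # opaque box, filled iff one-boxing was predicted
SMALL = 1_000     # transparent box, always present

def one_box_gain(d):
    """Expected gain of one-boxing over two-boxing, where d is the
    increase in the probability that the opaque box is full when you
    one-box rather than two-box (the subjunctive-dependence edge)."""
    return d * BIG - SMALL

# One-boxing pays exactly when d exceeds SMALL / BIG = 0.001, i.e. 0.1%.
break_even = SMALL / BIG
```

So long as choosing to one-box raises the probability of the opaque box being full by more than 0.1%, the expected $1,000,000 gained outweighs the guaranteed $1,000 forgone.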
In reality you have actions available other than just “one box” or “two box”. You may be able to change things about yourself so that Omega becomes more likely to predict that you will one-box, which may be worthwhile depending on the cost of those actions. Increasing your chance of an extra million dollars is probably worth some effort.
While technically within the scope of decision theory, any such actions are likely to depend on fiddly details of Omega’s prediction process and are too annoying to model in toy problems. The existence of such actions is relevant to related fields in psychology, though. A great deal of human labour seems to go into shifting others’ predictions of one’s own behaviour. Some of those efforts may also change the future behaviour itself in a direction that aligns with the changed predictions (even if unintentionally); others do not.