Really? People never decide how to treat you based on estimations of your decision theory (aka your “character”)?
They don’t make those decisions with “paranormally assured 100% knowledge” of my decision theory. That’s the “extreme that doesn’t actually happen”. And this is why I won’t be adopting any new paradigm of decision theory unless I can start in the middle, with situations that do happen, and move gradually towards the extremes, and see the desirability or necessity of the new paradigm that way.
As has been said many times (at least by me, definitely by many others), you don’t need 100% accuracy for the argument to hold. If Parfit’s mindreader is only 75% accurate, that still justifies choosing the pay / cooperate / one-box option. One-boxing on Newcomblike problems is simply what you get when you have a decision theory that wins in these reasonable cases and is continuous, and you then take the limit as the parameters of the problem go to what they need to be to make it Newcomb’s problem (such as making the predictor 100% accurate).
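A quick sanity check on the 75% figure (my own sketch, not from the thread): assuming the standard Newcomb payoffs of $1,000,000 in the opaque box if you are predicted to one-box and a guaranteed $1,000 for taking the transparent box as well, and computing the expected value of each choice on the assumption that the predictor matches your actual choice with probability `accuracy`:

```python
# Back-of-the-envelope check of the "75% accurate predictor" claim.
# Payoffs are the standard Newcomb numbers, assumed here since the thread
# doesn't fix them: $1,000,000 in the opaque box iff you are predicted to
# one-box, plus a guaranteed $1,000 if you also take the transparent box.

def expected_values(accuracy, big=1_000_000, small=1_000):
    """Expected value of each choice, conditioning on the predictor
    matching your actual choice with probability `accuracy`."""
    ev_one_box = accuracy * big
    ev_two_box = (1 - accuracy) * big + small
    return ev_one_box, ev_two_box

for p in (0.51, 0.75, 0.99, 1.0):
    one, two = expected_values(p)
    print(f"accuracy={p:.2f}: one-box EV=${one:,.0f}, two-box EV=${two:,.0f}")

# One-boxing comes out ahead for any accuracy above roughly 50.05%
# (the point where accuracy * big exceeds (1 - accuracy) * big + small),
# so nothing in the argument hinges on the 100% limit.
```

At 75% accuracy the one-boxer expects about $750,000 against the two-boxer’s $251,000; the paranormal 100% case only sharpens a gap that is already there at much weaker accuracies.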
If it helps, think of the belief in one-boxing as belief in the implied optimal.
It doesn’t matter that you’ll never be in Newcomb’s problem. It doesn’t matter that you’ll never be in an epistemic state where you can justifiably believe that you are. One-boxing is just an implication of having a good decision theory.
Part of my concern is that I’ll end up wasting time, chasing my tail in an attempt to deal with fictitious problems, when I could be working on real problems. I’m still undecided about the merits of acausal decision theories, as a way of dealing with the thought experiments, but I am really skeptical that they are relevant to anything practical, like coordination problems.