In part IV, can you explain more about what your examples prove?
You say FDT is motivated by an intuition in favor of one-boxing, but that seems false given your own definition of Newcomb’s Problem. FDT was ultimately motivated by the intuition that it would win. If you read that post, it also seems to be based on intuitions about AI: specifically, that a robot programmed to use CDT would self-modify to a more reflective decision theory if given the chance, because that choice gives it more utility. Your practical objection about humans may therefore not apply to MIRI.
As far as your examples go, neither my actions nor my abstract decision procedure controls whether or not I’m Scottish. Therefore, one-boxing gives me less utility in the Scot-Box Problem, and I should not do it; see the sketch below.
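To make the dominance point concrete, here’s a rough sketch. The payoff amounts and the exact box setup are just illustrative Newcomb-style assumptions on my part (a $1,000,000 opaque box filled iff the agent is Scottish, plus a $1,000 transparent box), not figures from your post; the only thing that matters is that Scottishness is held fixed independently of the choice.

```python
# Sketch of the dominance argument in the Scot-Box Problem.
# ASSUMPTION: Newcomb-style payoffs ($1,000,000 opaque box filled iff the
# agent is Scottish, $1,000 transparent box); the real problem's numbers
# may differ, but the structure of the argument is the same.

PAYOFFS = {
    # (is_scottish, action): total payout in dollars
    (True,  "one-box"): 1_000_000,
    (True,  "two-box"): 1_001_000,
    (False, "one-box"): 0,
    (False, "two-box"): 1_000,
}

for is_scottish in (True, False):
    one = PAYOFFS[(is_scottish, "one-box")]
    two = PAYOFFS[(is_scottish, "two-box")]
    # Because neither the action nor the decision procedure changes
    # is_scottish, two-boxing adds the transparent $1,000 in both rows,
    # so it strictly dominates one-boxing.
    print(f"Scottish={is_scottish}: one-box={one}, two-box={two}")
```

In both rows two-boxing comes out $1,000 ahead, which is the whole point: once Scottishness is fixed independently of my choice, one-boxing just leaves money on the table.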
Exception: Perhaps Scotland in this scenario is known for following FDT. Then FDT might in fact say to one-box (I’m not an expert) and this may well be the right answer.