Both ways of simulating counterfactuals remove some information: either you change [Dave]’s prediction, or you stop it being correct. In the real world, the robot knows that Dave will correctly predict it, but its counterfactuals contain scenarios where [Dave] is wrong.
Suppose there were two identical robots, and the paths A and B were only wide enough for one robot, so 1A1B > 2A > 2B in both robots’ preference orderings. Each robot predicts that the other will take path Q, and so decides to take path R ≠ Q (where {Q, R} = {A, B}). The robots’ decisions oscillate through the levels of simulation until the approximations become too crude. Both robots then take the same path, with the path they take depending on whether they had compute for an odd or even number of simulation layers. They will do this even if they have a way to distinguish themselves, such as flipping coins until one gets heads and the other doesn’t (assuming an epsilon cost to this method).
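The oscillation can be sketched in a few lines. This is a toy model, not anything from the original scenario beyond its structure: each bounded agent best-responds to a depth-limited simulation of an identical agent, with an arbitrary fixed guess ("A") once the recursion bottoms out.

```python
def payoff(my_path, other_path):
    """My utility under the ordering 1A1B > 2A > 2B."""
    if my_path != other_path:
        return 2                         # best: one robot per path
    return 1 if my_path == "A" else 0    # 2A beats 2B

def decide(depth):
    """Best-respond to a depth-limited simulation of an identical agent.
    At depth 0 the approximation is too crude; fall back to a fixed
    default guess ("A") -- an assumption of this sketch."""
    if depth == 0:
        return "A"
    other = decide(depth - 1)            # simulate the other robot
    # Best response: take whichever path the simulated other avoids.
    return "B" if other == "A" else "A"

# Identical robots with identical compute reach the same depth, so they
# pick the SAME path; which path flips with the parity of the depth.
for depth in (1, 2, 3, 4):
    print(depth, decide(depth))          # B, A, B, A
```

Since both robots run `decide` at the same depth, they always collide, and the odd/even parity of the depth determines whether the collision is on A or on B.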
In general, CDT doesn’t work when being predicted.