I’m not an advocate (or detractor) of those decision theories, but the response that immediately occurs to me is to ask what drew this particular scenario to your attention out of all possible scenarios. Abstractly, the scenario is that in some possible world, someone doing X prevented disaster Y. For which X and Y should I therefore do X, even if disaster Y cannot occur in this world?
Somehow, you obtained the bits necessary to pull from possibility space the instance X = build a golden statue of Genghis Khan and Y = Genghis Khan in another world stops making war. What drew that instance to your attention, rather than, for example, Y’ = Genghis Khan, inspired by this monument, wages war even more mightily? Or Y” = to get all this gold, the monument-builder himself must conquer the world? And so on.
It’s like the sort of Pascal’s Mugging scenario in which there is no reason to expect that particular consequence to follow from the action any more than any other.
A more fruitful question is “should I be the sort of person who does X-ish actions in Y-ish situations?” for various values of X and Y. Here, TDT and its relatives may justify, e.g., cooperating in the Prisoner’s Dilemma or paying in Parfit’s hitchhiker, cases that conventional decision theories have trouble with.