I would like advocates of TDT, UDT, etc., to comment on the following scenario.
Suppose I think of a possible world where there is a version of Genghis Khan who thinks of this version of me. Then I imagine Genghis imagining my responses to his possible actions. Finally I imagine him agreeing to not kill everyone in the next country he invades, if I commit to building a thirty-meter golden statue of him, in my world. Then I go and build the statue, feeling like a great humanitarian because I saved some lives in another possible world.
My questions are: Is this crazy? If so, why is it crazy? And, is there an example of similar reasoning that isn’t crazy?
I think one needs to significantly abstract this example to understand the reasoning at human levels. (EDIT TO ADD: And I also think your usage of the word ‘imagine’ is confusing because it connotes ‘making things up’ instead of ‘attempting to accurately model in your mind’.)
E.g. Let’s say you have made a habit of providing a helping hand to strangers. One day you learn that Genghis Khan, in a different time and on a different continent, put an end to his butchering because he saw people helping strangers and suddenly took that idea seriously, which made him reevaluate e.g. his cynicism towards humanity and whether brutality truly provides happiness.
In this sense a part of you, a part of your decision process, the kindness-to-strangers part, is responsible for stopping Genghis Khan. Other parts of you (your memories, your sense of identity, your personal history) aren’t. Nothing that “recognizably” belongs strictly to you did it, but a part of you is ‘responsible’ nonetheless.
--
Or here’s a different example, a more scientifictional one. Aliens inform the human population that the next day they’ll pick one adult human at random from the whole population and secretly observe that person for a day. That person will not have to do anything special, just clap their hands once during the day. If they do, the earth will be safe; if they don’t clap their hands at all during the day, the earth will be doomed.
Next day, three billion people clap their hands, just to be on the safe side. Three billion other people don’t; after all, the chance that they’ll be the “one chosen” is only one in six billion, close to nothing.
The aliens choose Alice. Alice happened to not clap. The earth is destroyed.
My moral intuition tells me that the three billion people who chose not to clap share equally in the responsibility for the Earth’s destruction. Alice, who got randomly selected, didn’t decide anything differently from the rest of them, and is therefore no more “responsible” than any of them in a timeless sense. Since her decision process was identical to that of every other non-clapper, by my logic and moral intuition Alice shares the responsibility equally with them, even though causally only she caused the destruction of the earth and the other 2,999,999,999 harmed no one.
Likewise, if the aliens had chosen Bob and Bob was a clapper, there’s no need to treat Bob as a hero who saved mankind any more than the other 2,999,999,999 clappers did. The part that determined the saving of the earth was distributed equally among them; the selection of Bob in particular is random and irrelevant in comparison.
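To put numbers on it, here’s a quick expected-value check. The utility I assign to the earth surviving and the cost of a clap are made-up placeholders, but they show why “only one in six billion” is the wrong way to dismiss the decision:

```python
# Back-of-the-envelope expected-value check for the clapping decision.
# The value of the earth surviving and the cost of clapping are
# illustrative placeholders, not figures from the scenario itself.

p_chosen = 1 / 6_000_000_000      # chance that you are the one being observed
value_of_earth = 1e15             # stand-in utility for "the earth is not doomed"
cost_of_clapping = 1e-3           # stand-in utility cost of clapping once

expected_gain = p_chosen * value_of_earth
print(expected_gain, expected_gain > cost_of_clapping)  # ~166666.7 True: clap
```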
The “probability” of the imagined world is low, so the opportunity cost of this action makes it wrong. If there were a world fitting your description that had significant “probability” (for example, if you deduced that a past random event turning out differently would likely have led to the situation you describe), it would be a plausibly correct action to take.
(The unclear point is what contributes to a world’s “probability”; presumably, arbitrary stipulations drive it down, so most thought experiments are morally irrelevant.)
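As a rough sketch of this point (with an invented “probability” for the imagined world and invented payoffs, none of which come from the scenario itself), the same act flips from wrong to plausibly correct as the world’s “probability” rises:

```python
# Sketch of the opportunity-cost argument: acting for the sake of an imagined
# world only pays if that world's "probability" is high enough.
# Every number here is an invented placeholder.

def net_value(p_world, value_there, cost_here, best_alternative_here):
    """Expected value of the act, minus its cost and the forgone alternative."""
    return p_world * value_there - cost_here - best_alternative_here

# Arbitrarily stipulated world: negligible probability, so the statue loses.
print(net_value(1e-30, 1e6, 1e3, 1e4) > 0)   # False

# World made likely by a known past random event: the same act can win.
print(net_value(0.1, 1e6, 1e3, 1e4) > 0)     # True
```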
I’m not an advocate (or detractor) of those decision theories, but the answer that immediately appears to me is to question what drew this particular scenario to your attention out of all possible scenarios. Abstractly, the scenario is that in some possible world, someone doing X prevented disaster Y. For which X and Y should I therefore do X, even if disaster Y cannot occur in this world?
Somehow, you obtained the bits necessary to pull from possibility space the instance X = build a golden statue of Genghis Khan and Y = Genghis Khan in another world stops making war. What drew that instance to your attention, rather than, for example, Y’ = Genghis Khan, inspired by this monument, wages war even more mightily? Or Y” = to get all this gold, the monument-builder himself must conquer the world? And so on.
It’s like the type of Pascal’s Mugging scenario in which there is no reason to expect that particular consequence to result from the action any more than any other.
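To spell out the symmetry with a toy calculation (the payoffs are placeholders, and the equal weights just encode “no reason to expect one over the other”):

```python
# If nothing singles out Y over its mirror image Y', the imagined payoffs
# cancel and the action buys nothing in expectation.
# Payoffs and weights are illustrative assumptions.

consequences = {
    "Y:  Khan, moved by the statue, stops making war": +1_000_000,
    "Y': Khan, inspired by the monument, wages war even more mightily": -1_000_000,
}
weight = 1 / len(consequences)    # no evidence favours either branch
print(sum(weight * v for v in consequences.values()))  # 0.0: no expected benefit
```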
A more fruitful question is “should I be the sort of person who does X-ish actions in Y-ish situations?” for various values of X and Y. Here, TDT etc. may give justifications for e.g. cooperation in PD, Parfit’s hitchhiker, etc., that conventional decision theories have problems with.