I’m not sure this scenario enlightens me. It seems to be about available information rather than deontologism vs consequentialism. From the way you describe it, both the deontologist and the consequentialist will murder Hitler if they know he’s going to become Hitler, and won’t if they don’t.
That seems like a fairly useless part of consequentialist theory. In particular, when reflecting on one’s past actions, a consequentialist should give more weight to the argument “yes, he turned out to become Hitler, but I didn’t know that, and the prior probability of the person who took my parking space being Hitler is so low that I would not have been justified in stabbing him for that reason” than to “oh no, I’ve failed to stab Hitler”. It’s simply the more productive attitude, given that the next person who takes the consequentialist’s parking space is probably not Stalin.
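A toy expected-value calculation (every number here is invented purely for illustration) makes the point concrete: even an enormous payoff for stopping a future Hitler is swamped by a small enough prior.

```python
# Hypothetical sketch with made-up numbers: even if preventing a future
# Hitler is enormously valuable, a tiny enough prior makes stabbing a
# random parking-space thief a bad bet in expectation.
p_hitler = 1e-9         # prior that this particular stranger becomes Hitler
value_if_hitler = 1e7   # utility of stopping him, in arbitrary units
cost_of_murder = 1.0    # disutility of killing an innocent person

ev_stab = p_hitler * value_if_hitler - (1 - p_hitler) * cost_of_murder
print(ev_stab < 0)  # True: negative expected value, so don't stab
```

The exact magnitudes don’t matter; what matters is that the prior multiplies the payoff, so the ex-ante verdict can be correct even when the ex-post outcome looks like a missed opportunity.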
Real-life morality is tricky. But when playing a video game, I am a points consequentialist: I believe that the right thing to do in the video game is whatever maximizes the number of points I get at the end.
Suppose one of my options has been randomly selected to lead to losing the game. I analyze the options and choose the one least likely to have been selected. As it turns out, I was unlucky and lost the game. Does that make my choice any less the right one? I don’t believe that it does.
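For what it’s worth, the video-game version is easy to sketch in code (the option labels and probabilities below are made up): picking the option least likely to have been the selected loser maximizes the win probability, yet still loses some fraction of the time.

```python
import random

# Hypothetical sketch: one of three options is secretly selected to lose,
# with the given (unequal) probabilities. Choosing the least likely option
# maximizes the chance of winning, even though you sometimes lose anyway.
probs = {"A": 0.5, "B": 0.3, "C": 0.2}  # made-up selection probabilities

best = min(probs, key=probs.get)        # the ex-ante right choice: "C"
win_prob = 1 - probs[best]              # 0.8 chance of winning

# Simulate many games: the right choice still loses roughly 20% of the time.
losses = sum(
    random.choices(list(probs), weights=probs.values())[0] == best
    for _ in range(10_000)
)
print(best, win_prob, losses / 10_000)
```

Losing one of those simulated games doesn’t retroactively make picking `best` wrong; it was the option with the highest win probability given what was known.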
> I’m not sure this scenario enlightens me. It seems to be about available information rather than deontologism vs consequentialism. From the way you describe it, both the deontologist and the consequentialist will murder Hitler if they know he’s going to become Hitler, and won’t if they don’t.
The consequentialist will not in fact kill Hitler if they don’t know he’s Hitler, but it’s part of their theory that they should.
> That seems like a fairly useless part of consequentialist theory. In particular, when reflecting on one’s past actions, a consequentialist should give more weight to the argument “yes, he turned out to become Hitler, but I didn’t know that, and the prior probability of the person who took my parking space being Hitler is so low that I would not have been justified in stabbing him for that reason” than to “oh no, I’ve failed to stab Hitler”. It’s simply the more productive attitude, given that the next person who takes the consequentialist’s parking space is probably not Stalin.
>
> Real-life morality is tricky. But when playing a video game, I am a points consequentialist: I believe that the right thing to do in the video game is whatever maximizes the number of points I get at the end.
>
> Suppose one of my options has been randomly selected to lead to losing the game. I analyze the options and choose the one least likely to have been selected. As it turns out, I was unlucky and lost the game. Does that make my choice any less the right one? I don’t believe that it does.
Same for the consequentialist, no?