TDT basically takes into account the consequences of itself: not just each particular action it endorses, but the consequences of your following a specific line of reasoning to that action, and the consequences of other people knowing that you would follow such reasoning.
It’s a consequentialist theory because it seeks to maximize the utility of the resulting states of the world. It doesn’t have deontological instructions like “cooperate because it’s the nice thing to do”; it says things like “cooperate if and only if the other guy would be able to predict and punish your non-cooperation, because that leads to an optimal-utility state for you.”
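To make that decision rule concrete, here is a minimal sketch in Python, using the standard Prisoner’s Dilemma payoffs; the names (PAYOFF, tdt_choice, opponent_predicts_me) are illustrative choices of mine, not part of any actual TDT formalism:

```python
# Standard Prisoner's Dilemma payoffs for (my_move, their_move).
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect: I get exploited
    ("D", "C"): 5,  # I defect, they cooperate: I exploit them
    ("D", "D"): 1,  # mutual defection
}

def tdt_choice(opponent_predicts_me: bool) -> str:
    """Cooperate iff the opponent would predict and punish non-cooperation.

    If my decision is transparent to the opponent, my real options are
    (C, C) vs (D, D), and cooperating wins (3 > 1). If it is opaque,
    their move doesn't depend on mine, so defecting dominates
    (5 > 3 against a cooperator, 1 > 0 against a defector).
    """
    return "C" if opponent_predicts_me else "D"
```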
All that having been said, I think some people are misusing TDT when they assume other people would know about your non-cooperation. Omega would know about your non-cooperation, but other people you may be able to trick. And TDT orders cooperation only against those you wouldn’t be able to trick.
But then people you would (otherwise) be able to trick have an incentive to defect, which makes it harder to trick them and makes (D,D) more likely than (C,C), which is bad for you. Having an intention to trick those you can trick can itself be a bad idea (for some categories of trickable opponents that respond to your having this intention).
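A back-of-the-envelope version of that incentive argument, with made-up numbers (standard payoffs plus a hypothetical detection probability, neither of which the comment above commits to):

```python
# Illustrative numbers only: temptation, reward, punishment payoffs
# from the table above, and an assumed chance of being found out.
T, R, P = 5, 3, 1
p_detected = 0.8  # how often a trick-responsive opponent notices your intention

# If detected, they defect preemptively and you end up at (D, D);
# if not, your trick succeeds and you get the temptation payoff.
eu_trick = p_detected * P + (1 - p_detected) * T  # 0.8*1 + 0.2*5 = 1.8
eu_honest = R                                     # credibly cooperate: 3.0
print(eu_trick, eu_honest)  # tricking loses once detection is likely enough
```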
Yes, it can be a bad idea—I’m just saying TDT doesn’t say it’s always a bad idea.
(DefectBot is sufficient to demonstrate that it’s not always a bad idea to defect. In other cases, it can be much more subtle.)
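For the DefectBot half of that parenthetical, the point is mechanical; here is a sketch, taking DefectBot in its usual sense of an unconditional defector and reusing the illustrative payoff table from above:

```python
# Illustrative payoffs for (my_move, their_move), as in the earlier sketch.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def defect_bot(*_: object) -> str:
    """DefectBot ignores its opponent entirely and always defects."""
    return "D"

# DefectBot's move is fixed at "D", so my choice only selects between
# PAYOFF[("C", "D")] = 0 and PAYOFF[("D", "D")] = 1: defection wins.
best_response = max("CD", key=lambda me: PAYOFF[(me, defect_bot())])
assert best_response == "D"
```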
TDT can’t reason about such things: it gets its causal graphs by magic, and this reasoning involves details of how the causal graphs are constructed (it can still make the right decisions, provided the magic comes through). UDT is closer to the mark, but we don’t have a good picture of how that works. See in particular this thought experiment.