Omega would know about your non-cooperation, but other people you may be able to trick. And TDT prescribes cooperation only against those you wouldn't be able to trick.
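The "cooperate only with those you can't trick" idea can be sketched as a toy program. This is my own hypothetical framing, not anything from the discussion: MirrorBot's move tracks our actual move, so it can't be tricked, while GullibleBot cooperates no matter what we do.

```python
# Standard one-shot Prisoner's Dilemma payoffs to us: (our_move, their_move).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def mirror_bot(my_move):
    return my_move      # untrickable: its move copies whatever we actually play

def gullible_bot(my_move):
    return "C"          # trickable: cooperates unconditionally

def choose(opponent):
    """Pick the move maximizing our payoff, given how the opponent responds."""
    return max(["C", "D"], key=lambda m: PAYOFF[(m, opponent(m))])

print(choose(mirror_bot))    # "C": defecting would only produce (D, D)
print(choose(gullible_bot))  # "D": exploitation costs nothing here
```

Against the untrickable opponent, defection just buys mutual defection, so cooperation wins; against the trickable one, defection is strictly better.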
But then people you would (otherwise) be able to trick have the incentive to defect, making it harder to trick them, making (D,D) more likely than (C,C), which is bad for you. Having an intention to trick those you can trick can itself be a bad idea (for some categories of trickable opponents that respond to your having this intention).
Having an intention to trick those you can trick can itself be a bad idea (for some categories of trickable opponents that respond to your having this intention).
Yes, it can be a bad idea—I’m just saying TDT doesn’t say it’s always a bad idea.
TDT can’t reason about such things: it gets its causal graphs by magic, and this reasoning involves details of the construction of those causal graphs (it can still make the right decisions, provided the magic comes through). UDT is closer to the mark, but we don’t have a good picture of how that works. See in particular this thought experiment.
(DefectBot is sufficient to demonstrate that it’s not always a bad idea to defect. In other cases, it can be much more subtle.)
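The DefectBot case can be made concrete with a small sketch (my own toy setup, not from the thread): DefectBot ignores us entirely, so cooperating buys nothing.

```python
# Our payoffs in a Prisoner's Dilemma when the opponent defects.
PD = {("C", "D"): 0, ("D", "D"): 1}

def defect_bot(my_move):
    return "D"   # defects regardless of anything we do or signal

their_move = defect_bot("C")
best = max(["C", "D"], key=lambda m: PD[(m, their_move)])
print(best)      # "D": cooperating against DefectBot only lowers our payoff
```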