It seems to me there was some causal factor that flipped the switch for me (maybe it was reading about UDT or something), and I should be seeking to cause that same causal factor in other similar brains.
Indeed you can use causal pathways like culture to increase the chances of people deontologically deciding to cooperate, or of people using UDT, but the latter is only useful if UDT cooperates. According to UDT, to decide what to do, compare not the possible worlds conditional on “I decide to cooperate/defect”, but conditional on “UDT cooperates/defects”.
Of course CDT can’t be convinced in the moment that deciding to vote for your party changes the expected tallies by any more than one. But even CDT would agree that the CDT party loses against the UDT party, and that it should build UDT rather than CDT into its AI if that AI will be playing Prisoner’s Dilemmas against its copies.
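The contrast between the two decision rules can be made concrete with a toy sketch (mine, not from the thread): in a one-shot Prisoner's Dilemma against an exact copy, CDT evaluates each move holding the opponent's move fixed, while UDT conditions on "the UDT algorithm outputs this move", which the copy then mirrors. Payoff values are the standard illustrative ones, not anything specified above.

```python
# Standard illustrative Prisoner's Dilemma payoffs (assumed, not from the
# comment): (my move, opponent's move) -> my payoff.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def cdt_choice(opponent_move):
    # CDT compares worlds conditional on "I cooperate/defect", holding the
    # opponent's (already-determined) move fixed. Defecting dominates.
    return max(["C", "D"], key=lambda m: PAYOFF[(m, opponent_move)])

def udt_choice():
    # UDT compares worlds conditional on "UDT outputs m". Against an exact
    # copy, the copy outputs the same m, so only (m, m) outcomes are on
    # the table.
    return max(["C", "D"], key=lambda m: PAYOFF[(m, m)])

print(cdt_choice("C"), cdt_choice("D"))  # CDT defects either way
print(udt_choice())                      # UDT cooperates with its copy
```

This is why a CDT party can agree, in advance, that an agent playing against its own copies does better if built with UDT: the CDT rule defects regardless, landing on the (1, 1) outcome, while the UDT rule lands on (3, 3).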
Ahh that makes sense.