So “play whatever I think X will play” does count as a strategy, but “play whatever X plays” does not count as a strategy because Y can’t actually implement it.
It can’t implement “play whatever I think X will play” either, because it doesn’t know what X will play.
To put it in one sentence: if we are talking about an ADT-like PD (the model of TDT in this post appears to be more complicated), Y could be said to choose the action such that provability of Y choosing that action implies X’s choosing a good matching action. So Y doesn’t act depending on what X does, or on what Y thinks X does, etc. Y acts depending on what X can be inferred to do if we additionally assume that Y is doing a certain thing, and the thing we additionally assume Y to be doing is a specific action, not a strategy of responding to X’s source code, or a strategy of responding to X’s action. If you describe X’s algorithm the same way, you can see that the additional assumption about Y’s action is not what X uses in making its decision, for it similarly makes an additional assumption about its own (X’s) action and then looks at what can be inferred about Y’s action (and not Y’s “strategy”).
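For concreteness, here is a minimal sketch of what such a player could look like, assuming a hypothetical bounded proof search `provable(statement)` over a fixed theory that contains both players’ source code. The helper, the action names, and the statement syntax are illustrative assumptions, not anything defined in the post.

```python
# Minimal sketch of an ADT-like player Y in the Prisoner's Dilemma.
# `provable` is a hypothetical stand-in for a bounded search over proofs
# in some agreed-upon formal theory that includes the source of X and Y.

COOPERATE, DEFECT = "C", "D"

def provable(statement, max_proof_length=10**6):
    """Hypothetical: True iff `statement` has a proof of length at most
    `max_proof_length` in the agreed-upon theory."""
    raise NotImplementedError("stand-in for a bounded proof search")

def Y():
    # Y does not ask "what will X do?".  It asks: under the added
    # assumption that Y plays a specific action, what can be inferred
    # about X?  If assuming "Y() = C" lets us infer "X() = C", cooperate.
    if provable("Y() = C  ->  X() = C"):
        return COOPERATE
    return DEFECT
```

Note that the thing being assumed is the bare action “Y() = C”, not any strategy of Y’s.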
Y acts depending on what X can be inferred to do if we additionally assume that Y is doing a certain thing, and the thing we additionally assume Y to be doing is a specific action, not a strategy of responding to X’s source code, or a strategy of responding to X’s action.
Can you write the “cooperate iff I cooperate iff they cooperate … ” bot this way? I thought the strength of TDT was that it allowed that bot.
Can you write the “cooperate iff I cooperate iff they cooperate … ” bot this way?
This can be unpacked as an algorithm that searches for a proof of the statement “If I cooperate, then my opponent also cooperates; if I defect, then my opponent also defects”, and cooperates if it finds such a proof. Under certain conditions, two players running something like this algorithm will cooperate. As you can see, the agent’s decision here depends not on the opponent’s decision, but on how the opponent’s decision depends on the agent’s own decision (and not on how the agent’s decision depends on the opponent’s decision, and so on).
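As a sketch, the unpacked bot might look like this, reusing the hypothetical `provable` helper from the earlier sketch; the statement it searches for is the pair of implications listed above, i.e. a claim about dependence, not a prediction of the opponent’s action by itself.

```python
# Sketch of the "cooperate iff I cooperate iff they cooperate ..." bot.
# It searches for a proof that the opponent's action depends on its own
# (both implications), and cooperates only if that dependence is provable.

def me():
    dependence = ("(me() = C  ->  opponent() = C) and "
                  "(me() = D  ->  opponent() = D)")
    if provable(dependence):
        return "C"
    return "D"
```

Two copies of this bot playing each other can, with the right choice of theory and proof bound, both find the proof and cooperate, which is the “under certain conditions” result mentioned above.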
Okay. I think that fits with my view: so long as it’s possible to go from X’s strategy and Y’s strategy to an outcome, then we can build a table of strategy-strategy-outcome triplets, and do analysis on that. (I built an example over here.) What I’m taking from this subthread is that the word “strategy” needs to have a particular meaning to be accurate, and so I need to be more careful when I use it so that it’s clear that I’m conforming to that meaning.
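To illustrate, here is a toy version of that table. The strategies and the outcome of each pairing are filled in by hand as assumptions for illustration; actually deriving the ProofBot rows would require carrying out the proof-search analysis.

```python
# Toy "strategy-strategy-outcome" table for three strategies.
# Outcomes are written as (row player's move, column player's move)
# and are assumed by hand for illustration.

from itertools import product

STRATEGIES = ["CooperateBot", "DefectBot", "ProofBot"]

OUTCOME = {
    ("CooperateBot", "CooperateBot"): ("C", "C"),
    ("CooperateBot", "DefectBot"):    ("C", "D"),
    ("CooperateBot", "ProofBot"):     ("C", "D"),
    ("DefectBot",    "DefectBot"):    ("D", "D"),
    ("DefectBot",    "ProofBot"):     ("D", "D"),
    ("ProofBot",     "ProofBot"):     ("C", "C"),
}

def outcome(x, y):
    # Each unordered pair is stored once; flip the stored entry if needed.
    if (x, y) in OUTCOME:
        return OUTCOME[(x, y)]
    a, b = OUTCOME[(y, x)]
    return (b, a)

# Build the full table of (X's strategy, Y's strategy, outcome) triplets.
table = [(x, y, outcome(x, y)) for x, y in product(STRATEGIES, STRATEGIES)]
for row in table:
    print(row)
```

With the table built, the usual analysis can then be run over strategies rather than over single actions.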