The trouble is, of course, that if you both predictably (say, with 98% probability) switch to defecting after one sees ‘A’ and the other sees ‘B’, you could just as easily (following some flavor of TDT) predictably cooperate.
This issue stems from an oversimplification in TDT: it treats algorithms as atomic causes of actions, rather than as lossy abstractions over complex physical states. That is a very difficult AI problem, and I'm pretending it is solved for the purposes of my posts.
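To make the symmetry point concrete, here is a minimal sketch (the payoff matrix, policy names, and 98% figure are illustrative assumptions, not anything specified in the post). Both agents run the *same* algorithm; one observes 'A', the other 'B'. Whether that shared algorithm says "predictably defect on asymmetric observations" or "predictably cooperate anyway," both agents end up doing the same thing, and the cooperative version simply pays more:

```python
import random

# Illustrative one-shot Prisoner's Dilemma payoffs (assumed, not from the post):
# (row player's payoff, column player's payoff)
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def defect_on_asymmetry(my_obs, their_obs, p=0.98):
    # Policy 1: with 98% probability, defect once the observations differ.
    if my_obs != their_obs and random.random() < p:
        return "D"
    return "C"

def cooperate_anyway(my_obs, their_obs, p=0.98):
    # Policy 2: with the same 98% predictability, cooperate regardless.
    return "C" if random.random() < p else "D"

def average_joint_payoff(policy, trials=10_000):
    # Both agents instantiate the SAME policy; one sees 'A', the other 'B'.
    total = 0
    for _ in range(trials):
        a = policy("A", "B")
        b = policy("B", "A")
        pa, pb = PAYOFFS[(a, b)]
        total += pa + pb
    return total / trials

random.seed(0)
print(average_joint_payoff(defect_on_asymmetry))  # asymmetry-triggered defection
print(average_joint_payoff(cooperate_anyway))     # equally predictable cooperation
```

The simulation is only meant to show that nothing about the 'A'/'B' asymmetry itself forces defection: the same 98%-predictable machinery supports either policy, and expected joint payoff is strictly higher under the cooperative one (roughly 6 versus roughly 2 with these assumed payoffs).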