There is a lot more work on this point, though not all of it is framed that way. What else, for example, would you call Robert Axelrod’s “Tit-for-Tat” if not a “fast and frugal satisficing algorithm”? In fact, there has been enough other work on this and related points that I would not refer to it as a counter-intuitive result.
Tit-for-tat doesn’t win because it’s computationally efficient.
Now that would be a cool extension of Axelrod’s tournament: include a penalty per round or per pairing as a function of algorithm length.
Length of source code, or running time? (How the heck did English end up with the same word for measuring time and a 3D axis?)
I had been thinking source-code length, so that it would correspond to Kolmogorov complexity. Both would actually work; they’d just test different things.
And perhaps the English question makes more sense if we consider things with a fourth time dimension ;)
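A minimal sketch of what such a penalized tournament might look like, assuming the source-code-length interpretation (the strategies, payoff values, and the PENALTY_PER_CHAR weight are all illustrative choices, not anything from Axelrod’s actual tournament):

```python
import inspect
import itertools

# Sketch: an Axelrod-style round-robin where each pairing's score is
# reduced by a penalty proportional to the strategy's source length
# (a crude stand-in for Kolmogorov complexity). All constants below
# are arbitrary, chosen only for illustration.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
ROUNDS = 200
PENALTY_PER_CHAR = 0.05  # hypothetical weight; tune to taste


def tit_for_tat(my_history, their_history):
    return their_history[-1] if their_history else "C"


def always_defect(my_history, their_history):
    return "D"


def grudger(my_history, their_history):
    return "D" if "D" in their_history else "C"


def complexity(strategy):
    # Source-code length as the Kolmogorov-complexity proxy.
    return len(inspect.getsource(strategy))


def play_match(a, b):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(ROUNDS):
        move_a = a(hist_a, hist_b)
        move_b = b(hist_b, hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    # Charge the complexity penalty once per pairing.
    return (score_a - PENALTY_PER_CHAR * complexity(a),
            score_b - PENALTY_PER_CHAR * complexity(b))


strategies = [tit_for_tat, always_defect, grudger]
totals = {s.__name__: 0.0 for s in strategies}
for a, b in itertools.combinations(strategies, 2):
    sa, sb = play_match(a, b)
    totals[a.__name__] += sa
    totals[b.__name__] += sb

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {total:.1f}")
```

Charging the penalty once per pairing keeps short, reactive strategies like tit-for-tat competitive; a per-round penalty would reward brevity much more heavily, which is exactly the kind of difference such an extension could probe.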