One good way to interpret code is to run it with “eval”, which many submitted bots did. This method has no problem with the examples you gave. One important place it breaks down is with bots that behave randomly: a random bot may, by chance, cooperate and defect in simulation in exactly the sequence that makes it seem worth cooperating with, even though it ends up defecting in the actual round. This, combined with a little luck, made the random bots come out ahead. There are ways to get around this problem, and a few bots did so, but they still didn’t do better than the random bots because this particular pool of entrants gave them less potential for exploitation.
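To make that failure mode concrete, here’s a minimal Python sketch; the bot interface (source strings defining a move(opponent_source) function) is an invention for illustration, not the tournament’s actual API:

```python
# A sketch of simulation-by-eval. The interface here, bots as source
# strings defining move(opponent_source) -> "C" or "D", is assumed for
# illustration only.
RANDOMBOT = "import random\ndef move(opp): return random.choice('CD')"
COOPERATEBOT = "def move(opp): return 'C'"

def run(bot_source, opponent_source=""):
    """Exec a bot's source in a fresh namespace and return its move."""
    env = {}
    exec(bot_source, env)
    return env["move"](opponent_source)

def naive_move(opponent_source):
    # One simulated run: a random bot that happens to return "C" here
    # looks identical to a cooperator, though it may defect in the
    # real round.
    return "C" if run(opponent_source) == "C" else "D"

def sampled_move(opponent_source, trials=100):
    # A workaround: simulate repeatedly and cooperate only with
    # opponents who cooperate reliably rather than once by chance.
    coops = sum(run(opponent_source) == "C" for _ in range(trials))
    return "C" if coops == trials else "D"

print(naive_move(RANDOMBOT))       # a coin flip: "C" about half the time
print(sampled_move(RANDOMBOT))     # almost surely "D"
print(sampled_move(COOPERATEBOT))  # "C"
```

A single simulation of a random bot is a coin flip, so naive_move cooperates with it about half the time; sampling many runs, which is the spirit of the workarounds a few bots used, exposes the randomness.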
I agree, my fellow top-ranking-non-source-ignoring player. Saying “nobody could do any better than randomness in this tournament” is strictly true but a bit misleading; the tiny, defect-happy pool with almost 20% random players (the top three, and also G, who just obfuscated his randomness somewhat) didn’t provide a very favorable structure for more intelligent bots to intelligently navigate, but there was still certainly some navigation.
I’m pretty pleased with how my bot performed; it never got deterministically CD’d, and most of its nonrandom mutual defections were against bots that had some unusual trigger condition for defecting based on source composition rather than performance, or that had very confused performance triggers (e.g. O: why would you want to play your opponent’s anti-defectbot move when you determine that they cooperate with cooperatebot?). Some of its mutual defections were certainly due to my detect-size-changes exploit, but so were its many DCs.
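For concreteness, here’s a hypothetical reconstruction of the trigger I’m complaining about in O, using the same invented source-string interface as the sketch above; it shows why playing the opponent’s anti-defectbot move means defecting against exactly the reciprocators worth cooperating with:

```python
# A hypothetical reconstruction of O's confused trigger, same invented
# interface: bots are source strings defining move(opponent_source).
COOPERATEBOT = "def move(opp): return 'C'"
DEFECTBOT = "def move(opp): return 'D'"
# A reciprocator: simulates the opponent and copies its move.
RECIPROCATOR = (
    "def move(opp):\n"
    "    env = {}\n"
    "    exec(opp, env)\n"
    "    return env['move']('')\n"
)

def run(bot_source, opponent_source):
    """Exec a bot's source in a fresh namespace and return its move."""
    env = {}
    exec(bot_source, env)
    return env["move"](opponent_source)

def o_style_move(opponent_source):
    # After confirming the opponent cooperates with cooperatebot, O
    # answers with the opponent's move against defectbot, so it defects
    # against any bot that punishes defectors.
    if run(opponent_source, COOPERATEBOT) == "C":
        return run(opponent_source, DEFECTBOT)
    return "D"

print(o_style_move(RECIPROCATOR))  # "D", against a perfectly cooperative partner
```

The saner version would simply cooperate with anyone who reliably cooperates with cooperatebot, rather than echoing how they treat a defector.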