That does seem exploitable, if one can figure out exactly what’s happening here.
Yes, it’s an issue with how the “time” command is set up. Basically, timeouts are done via threads and exceptions; there’s a detailed description of the “timeout” function in System.Timeout here: http://chimera.labs.oreilly.com/books/1230000000929/ch09.html#sec_timeout
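Roughly, the mechanism looks like the sketch below (an illustration of the idea only, not the real library code; the actual System.Timeout is more careful about masking, the non-threaded runtime, and other corner cases): fork a watchdog thread that throws a uniquely-tagged exception at the caller when the limit passes, and catch exactly that one exception.

```haskell
import Control.Concurrent (forkIO, killThread, myThreadId, threadDelay)
import Control.Exception (Exception, bracket, handleJust, throwTo)
import Data.Unique (Unique, newUnique)

-- Simplified sketch of the mechanism: a watchdog thread throws a uniquely
-- tagged Timeout exception at the caller after n microseconds, and the
-- wrapper catches *only* that exception.
newtype Timeout = Timeout Unique deriving (Eq)

instance Show Timeout where
  show _ = "<<timeout>>"

instance Exception Timeout

timeoutSketch :: Int -> IO a -> IO (Maybe a)
timeoutSketch n action = do
  caller <- myThreadId
  key    <- newUnique
  handleJust
    (\e -> if e == Timeout key then Just () else Nothing)  -- only our own timeout
    (\_ -> return Nothing)                                 -- limit was hit
    (bracket
       (forkIO (threadDelay n >> throwTo caller (Timeout key)))  -- start watchdog
       killThread                                                -- cancel it on exit
       (\_ -> Just <$> action))
```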
However, tetronian2’s “time” command catches all exceptions, which means that if multiple timeouts are nested, you can get some pretty strange behaviour.
In the TB vs SMB scenario, here’s what happens (a self-contained sketch that reproduces this follows the list):
1. The world runs TB vs SMB.
2. TB runs SMB vs MB (timeout #1, 10000us).
3. SMB runs MB vs MB (timeout #2, 10000us).
4. Timeout #1 triggers an exception, which is caught and handled by the catch-all around timeout #2, causing (3) to return Nothing.
5. Before the exception for timeout #2 can actually trigger, (2) finishes normally and kills the timeout thread; this means that (2) returns Just Cooperate.
6. Since (2) returned Just Cooperate, TrollBot returns Defect.
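Here is a minimal, self-contained sketch of that interaction. The wrapper below is only in the spirit of the tournament’s time command (I’m assuming a catch-all around a System.Timeout call; the bot logic and limits are placeholders, and the inner limit is made longer than the outer one just so the race resolves the same way on every run):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

import Control.Concurrent (threadDelay)
import Control.Exception (SomeException, catch)
import System.Timeout (timeout)

-- A wrapper in the spirit of the tournament's "time" command: run an action
-- under a timeout and turn *any* exception into Nothing.
timeCatchAll :: Int -> IO a -> IO (Maybe a)
timeCatchAll micros action =
  timeout micros action `catch` \(_ :: SomeException) -> return Nothing

main :: IO ()
main = do
  -- Outer limit (timeout #1): everything below should be cut off after 10000us.
  result <- timeCatchAll 10000 $ do
    -- Inner limit (timeout #2): its catch-all also swallows the Timeout
    -- exception thrown by the *outer* timeout when that one expires.
    _ <- timeCatchAll 20000 (threadDelay 50000)  -- a long nested simulation
    return "Cooperate"                           -- then finish normally
  -- Typically prints Just "Cooperate": the outer limit expired, but its
  -- exception was eaten inside the nested wrapper.
  print result
```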
I’ve brought this up as an issue on the GitHub issues page here: https://github.com/pdtournament/pdtournament/issues/3
I’ve suggested a solution that fixes this problem, but unfortunately race conditions are ultimately still present.
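For illustration only, and not necessarily the fix suggested in that issue: one general way to stop a nested wrapper from eating an enclosing timeout is to rethrow asynchronous exceptions instead of converting them to Nothing (in recent versions of base, the exception that timeout throws is delivered as an asynchronous exception, so it can be recognised via SomeAsyncException):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

import Control.Exception
  (SomeAsyncException, SomeException, catch, fromException, throwIO)
import System.Timeout (timeout)

-- A hypothetical safer wrapper: synchronous failures inside the action become
-- Nothing, but asynchronous exceptions (including the one thrown by an
-- *enclosing* timeout) are rethrown, so the outer limit still wins.
timeSafer :: Int -> IO a -> IO (Maybe a)
timeSafer micros action =
  timeout micros action `catch` \(e :: SomeException) ->
    case fromException e :: Maybe SomeAsyncException of
      Just _  -> throwIO e       -- let the enclosing timeout terminate us
      Nothing -> return Nothing  -- a genuine failure in the sandboxed action
```

Even with something along these lines, race conditions of the kind mentioned above are still possible, since an asynchronous exception can arrive at an awkward moment.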
Thank you very much for catching this and for explaining it in detail here and on the GitHub page; I’m going to play around with your proposed solution today and brainstorm any other ways of dealing with this.