Um, they’re pseudocode. I’m not sure what your objection is...
What do they do?
My intuition is that they’ll both fail to deduce that the other cooperates, and thus output their default actions: FairBot defects and AntiFairBot cooperates. However, I imagine there could be rigged-up versions in which FairBot deduces AntiFairBot’s cooperation at the last possible second and thus cooperates, while AntiFairBot runs out of time and thus cooperates by default.
(Think through the other possibilities, including one of them deducing the other’s cooperation with deductive capacity to spare, and you’ll see that they’re all inconsistent.)
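(If it helps, here’s a toy sketch of that case analysis in Python. It is not the original pseudocode: I’m assuming FairBot cooperates exactly when its bounded proof search shows the opponent cooperates and defects otherwise, while AntiFairBot does the reverse, cooperating by default. The “proof search” is just a placeholder flag; the script enumerates which combinations of search outcomes are consistent, mirroring the argument above.)

```python
# Toy sketch of the FairBot-vs-AntiFairBot case analysis, not the original
# pseudocode. Assumptions: FairBot cooperates iff its bounded proof search
# shows the opponent cooperates (default: Defect); AntiFairBot defects iff
# its search succeeds (default: Cooperate). The search itself is a stand-in
# boolean; we just check each combination of outcomes for consistency.

from itertools import product

C, D = "Cooperate", "Defect"

def fairbot(proves_opponent_cooperates: bool) -> str:
    # FairBot: cooperate only if the bounded search finds a proof; else defect.
    return C if proves_opponent_cooperates else D

def antifairbot(proves_opponent_cooperates: bool) -> str:
    # AntiFairBot: defect if the bounded search finds a proof; else cooperate.
    return D if proves_opponent_cooperates else C

consistent = []
for fb_proves, afb_proves in product([False, True], repeat=2):
    fb_action = fairbot(fb_proves)
    afb_action = antifairbot(afb_proves)
    # A successful proof that the opponent cooperates is only sound if the
    # opponent actually does cooperate; a timed-out search constrains nothing.
    sound = ((not fb_proves or afb_action == C)
             and (not afb_proves or fb_action == C))
    if sound:
        consistent.append((fb_proves, afb_proves, fb_action, afb_action))

for row in consistent:
    print(row)

# The two surviving combinations match the two scenarios described above:
#   (False, False, 'Defect', 'Cooperate')    -- both time out, default actions
#   (True,  False, 'Cooperate', 'Cooperate') -- FairBot barely finds the proof,
#                                                AntiFairBot runs out of time
# The case where AntiFairBot proves FairBot's cooperation (with capacity to
# spare or otherwise) is filtered out as inconsistent, as the parenthetical
# claims.
```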