Actually, it seems that crashes it. Thanks for the catch. I hadn't tested copy_cat against itself and should have. Forbidding strategies from passing their opponent fixes it, but it does suggest my program isn't as stable as I thought. I'm going to have to spend a few more days checking for bugs, since I missed one that big. Thanks, eugman.
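For anyone following along, here's a minimal sketch of why copy_cat against itself blows up (the names are made up for illustration; I'm assuming strategies receive their opponent as a callable):

```python
def copy_cat(opponent):
    # copy_cat simulates its opponent to predict the next move.
    # If the opponent is copy_cat itself, each simulation spawns
    # another simulation and the recursion never bottoms out.
    return opponent(copy_cat)

try:
    copy_cat(copy_cat)
except RecursionError:
    print("copy_cat vs copy_cat never terminates")
```

Forbidding strategies from handing their own opponent back in breaks this cycle, which is why that fix works.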
No problem. I've been thinking about it, and one should be able to recursively prove whether a strategy bottoms out. That should fix it, as long as the user passes terminating strategies. One warning about everyman: if it goes up against itself, it will run n^2 matches, where n is the number of strategies tested. Time consumed is an interesting issue, because if you take a long time it's like being a defectbot against smartbots: you punish bots that try to simulate you, but you lose points too.
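Quick arithmetic behind that warning, as a sketch (the strategy names are placeholders): if everyman tests every ordered pairing of the strategies it knows about, the match count grows quadratically.

```python
# Hypothetical roster of n = 4 strategies.
strategies = ["tit_for_tat", "defectbot", "copy_cat", "everyman"]
n = len(strategies)

# One simulated match per ordered pair of strategies.
matches = [(a, b) for a in strategies for b in strategies]
print(len(matches))  # n^2 = 16 pairings for n = 4
```

And that's per level: if everyman's simulated opponent is itself another everyman, each of those n^2 matches spawns its own n^2 matches underneath.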
I didn't get into it earlier, but everyman is a little more complicated. It runs through the match tests one at a time, from most likely to least, and checks after each match how much time it has left; if it starts to run out, it just exits with the best strategy it has figured out so far. By controlling how much time you pass in, strategies can avoid the n^2 processing problem. That's why I thought it was so important to include, even though it does give hints about whether strategies are being simulated. Everyman was built as a sort of upper bound: it's one of the least efficient strategies one might implement.
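Here's roughly what that loop looks like, as a sketch; the parameter names and the timing call are my assumptions, not the real interface:

```python
import time

def everyman(candidate_tests, time_budget, run_match, score_of):
    # candidate_tests: match tests ordered most-likely-first.
    # Run one test at a time, checking the remaining budget after
    # each match; bail out early with the best result found so far.
    deadline = time.monotonic() + time_budget
    best, best_score = None, float("-inf")
    for test in candidate_tests:
        if time.monotonic() >= deadline:
            break  # out of time: leave with the best so far
        score = score_of(run_match(test))
        if score > best_score:
            best, best_score = test, score
    return best
```

With a generous budget it degenerates into the full n^2 sweep; with a tight one it only gets through the most likely tests, which is the knob that keeps the cost bounded.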