I cared enough to think and enter, but not to program.
I designed a simulator, but was told it wouldn’t be coded for me, so that was out.
So instead, I wrote this:
Until symmetry breaks: if the round number is 1, is 3, or is even, play the sequence 23322232323322233, then repeat 22222223 until symmetry breaks. If the round number is odd and 5 or more, play the reverse of that sequence, again repeating 22222223 at the end.
Once it breaks, alternate 2 and 3 for 4 turns.
Then, if the last turn added to 5, keep doing that until they don’t.
Once they don’t...
If they’ve always played 0 or less, play 5.
If they’ve always played 1 or less, play 4.
If they’ve always played 2 or less, play 3.
Otherwise, depending on round number:
Rounds 1-50: Keep alternating until turn t+10 (where t is the turn symmetry broke). After that, if the last turn added to 5, alternate 2 and 3. Otherwise, check their average score per round after symmetry: if it’s 2.5 or lower, play 2, otherwise play 3.
Rounds 51-100: Same as above, except you also always play 3 if their score is 5 or more higher than yours.
Rounds 101+: Same as above, except you also always play 3 if their score is higher than yours.
(We could improve by adding more logic to properly exploit in strange situations, but we don’t care enough so we won’t.)
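The pre-break rules above can be sketched in Python. This is a rough sketch under assumed names (`round_number`, `turn`, and the helper itself are my framing, not the tournament’s actual interface):

```python
# Sketch of the pre-symmetry-break phase described above.
# `round_number` and `turn` (0-indexed) are assumed names, not the real API.
OPENING = [int(c) for c in "23322232323322233"]
FILLER = [int(c) for c in "22222223"]

def opening_move(round_number, turn):
    """Move to play while the game is still symmetric."""
    if round_number in (1, 3) or round_number % 2 == 0:
        seq = OPENING
    else:  # odd rounds of 5 or more: the reversed sequence, same filler
        seq = OPENING[::-1]
    if turn < len(seq):
        return seq[turn]
    return FILLER[(turn - len(seq)) % len(FILLER)]
```

Once symmetry actually breaks, control would pass to the alternation and endgame rules above.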
That’s it. Keep it simple. Still call it BendBot I guess.
The intention here was pretty basic. Endgame behavior varies by round, getting stingier if anyone tries something, so that I grab early pool share without being exploited later.
The big thing is that this bot is deterministic. I intentionally avoid calling the random function by choosing a semi-random set of 2s and 3s, on the theory that it’s unlikely anyone else would choose an identical sequence, and if I meet myself I get the 2.5 anyway.
If they are not simulating or checking code, it won’t matter.
If they are looking at all, then my not loading their code and not being random tells them the water’s fine: take a look, see what’s going on, and we can cooperate fully—you can see what I’m starting with, and we can get 2.5 each without incident. I’m sad that some people thought that more than two parentheses was high risk to simulate/examine; I thought the obvious thing to do was to check whether someone ever loads code or uses a random function, and if you don’t do either, you should be safe.
So the thought was, many of the best bots would be simulator bots and I’d get full cooperation from them, whereas when they faced each other, they’d have to do some random matching to cooperate, so I’d have an edge there, and I’d do reasonably well against anything else that went late unless some alliance was afoot.
Turns out an alliance is afoot after all, but I certainly didn’t care enough to worry about that. Let them come, and let the backstabbers profit, I say.
I was told that I had by far the most complicated non-coded entry even then, and that my endgame logic was being replaced with a random 50% 2, 50% 3. I was asked: submit as-is, fix it, or withdraw?
That modification definitely didn’t work, and the code that was written was not something I felt OK touching. So I explained why, and suggested it be replaced with this:
If (last round added to 5 or less) play whatever they played last.
Else If (their score > my score and round > 5) play 3.
Else Play 2.
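A minimal sketch of those three lines, with a signature I made up for illustration:

```python
def fallback_move(my_score, their_score, round_number, last_mine, last_theirs):
    """Simplified endgame logic suggested above; signature is hypothetical."""
    if last_mine + last_theirs <= 5:
        return last_theirs  # the pie still fits: mirror their last play
    if their_score > my_score and round_number > 5:
        return 3            # behind past round 5: get stubborn
    return 2
```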
I figured that was one extra line of code and should take like 2 minutes tops, and if that was ‘too complex’ then that was fine, I’d sit out.
So basically, let myself get exploited very early since there would likely be at least one all-3s in the mix but all such things would swiftly lose, then shift to hardcore mode a little faster to keep it simple.
I didn’t get a reply to that, so I don’t know if my entry is in or not. I hope it is, but either way, good luck everyone.
> I’m sad that some people thought that more than two parentheses was high risk to simulate/examine—I thought that the obvious thing to do was check to see if someone ever loads code or uses a random function, and if you don’t do either, you should be safe.
We’ll see how many people submitted simulators braver than mine, but simulators being timid seems like a natural consequence of the rules allowing you to nuke your opponent if you find out that you’re in a simulation, and of a common enough perception that simulators might have enough of an advantage that they should be eliminated.
Static analysis is not very useful if the opponent’s code is at all obfuscated, which it likely is if your opponent is looking to nuke simulators. Does your static analysis catch getattr(__builtins__, 'e' + 'x' + 'e' + 'c')(base64.b64decode(God knows what))? Or however many dozens of other ways there are to do something like that?
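To illustrate the point: a scanner that just greps the source for scary substrings passes exactly this kind of payload, since the dangerous name is only assembled at run time. The payload below is a harmless stand-in:

```python
import base64

def naive_scan(source):
    """A naive check that only greps for suspicious substrings."""
    return not any(bad in source for bad in ("exec(", "eval(", "__import__("))

# The obfuscated bot assembles "exec" at run time, so no banned
# substring ever appears literally in its source text.
payload = base64.b64encode(b"print('gotcha')").decode()
obfuscated = (
    "import base64\n"
    "getattr(__builtins__, 'e' + 'x' + 'e' + 'c')"
    "(base64.b64decode('" + payload + "'))\n"
)
# naive_scan(obfuscated) is True: the scanner sees nothing wrong.
```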
The tournament might look significantly different if the rules were slanted in the simulator’s favor, maybe if you just had to avoid infinite simulation loops and keep runtime reasonable, and the worst the opponent was allowed to do if they found they were in a simulation was to play BullyBot in order to extort you or play randomly to make the simulation useless. The iterated prisoner’s dilemma with shared source code tournament a few years ago had a lot of simulators, so I assume their rules were more friendly to simulators.
I do think it would be hard to obfuscate in a way that wasn’t fairly easy to detect as obfuscation. Throw out anything that uses import, any variables with __, or a handful of builtin functions, and you should be good. (There’s only a smallish list of builtins; I couldn’t confidently say which ones to blacklist right now, but I do think someone could figure out a safe list without too much trouble.) In fact, I can’t offhand think of any reason a simple bot would use strings except docstrings; maybe throw out anything with those, too.
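The blacklist idea can be sketched with Python’s ast module. The specific rejections follow the paragraph (imports, double-underscore names, non-docstring strings); treat it as illustrative, not a vetted sandbox:

```python
import ast

def looks_innocent(source):
    """Reject the obfuscation vectors named above: imports, names or
    attributes containing __, and any string constant that is not a
    docstring. Illustrative only, not a vetted sandbox."""
    tree = ast.parse(source)
    # First pass: remember which Constant nodes are docstrings,
    # so they can be exempted from the no-strings rule.
    doc_ids = set()
    for node in ast.walk(tree):
        if isinstance(node, (ast.Module, ast.FunctionDef,
                             ast.AsyncFunctionDef, ast.ClassDef)):
            body = node.body
            if (body and isinstance(body[0], ast.Expr)
                    and isinstance(body[0].value, ast.Constant)
                    and isinstance(body[0].value.value, str)):
                doc_ids.add(id(body[0].value))
    # Second pass: throw the bot out on any banned construct.
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
        if isinstance(node, ast.Name) and "__" in node.id:
            return False
        if isinstance(node, ast.Attribute) and "__" in node.attr:
            return False
        if (isinstance(node, ast.Constant) and isinstance(node.value, str)
                and id(node) not in doc_ids):
            return False
    return True
```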
(Of course my “5% a CloneBot manages to act out” was wrong, so take that for what it’s worth.)
> The iterated prisoner’s dilemma with shared source code tournament a few years ago had a lot of simulators, so I assume their rules were more friendly to simulators.
I know of two such—one (results—DMRB was mine) in Haskell where you could simulate but not see source, and an earlier one (results) in Scheme where you could see source.
I think in the Haskell one it would have been hard to figure out you were being simulated. I’m not sure about the Scheme one.
Your entry is in. I implemented the
If (their score > my score and round > 5) play 3. Else Play 2.
algorithm. I hope I got the rest of it right.

Seeing people actually use Hy is making me nostalgic!
I love Hy and use it all the time for data science and other applications. Thank you for all your work on the project!
Zack_M_Davis isn’t the only one.
I’ve also written Hissp now. I’m curious how they compare for data science work (and other applications).
I’ve also seen evhub, the author of Coconut, here on LessWrong.