This could be fixed with a trivial modification: first check whether the opponent’s source code is identical to your own, and cooperate if it is. Otherwise, run the other bot’s source.
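Concretely, something like the sketch below, under a convention invented purely for illustration: each bot’s source defines move(my_src, opponent_src), the tournament hands both sources in as strings, and moves are "C" or "D". The bare exec is a crude stand-in for whatever sandboxed interpreter a real tournament would provide.

```python
def move(my_src, opponent_src):
    if opponent_src == my_src:
        return "C"  # an exact copy of me: cooperate, and cut off the recursion
    # Otherwise behave as a chameleon: run the opponent's source and copy
    # whatever it plays. Note the assumption that the simulated bot is shown
    # its own source as its counterparty; that choice is what the CliqueBot
    # discussion below turns on.
    namespace = {}
    exec(opponent_src, namespace)  # crude stand-in for a sandboxed run
    return namespace["move"](opponent_src, opponent_src)
```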
However, there’s another problem: If DuncanS’s strategy plays against a CooperateBot or a random bot, it will throw away a lot of points.
Quite so. I actually never intended to post this—I edited the post several times, and in the end thought I’d abandoned it. But never mind.
I thought of various trivial modifications to fix the problem that you recurse infinitely deep if you meet yourself. You could check whether the opponent is an exact copy of you; that works as long as nobody else enters an equivalent strategy with cosmetically different source, in which case the two of you will do each other in.
I’m not so worried about the CooperateBot or the random bot, as these will in any case do so badly that they won’t tend to turn up in the population pool. But I could obviously do better against them by defecting all the way. In fact, any algorithm that neither runs me nor checks what I played previously is ignoring my behaviour, and should be defected against.
The most reliable fix to the recursion problem that I can think of is to implement a hard resource limit, whereby if you spend (say) 9500 of 10000 processing units on analysis, then you drop the analysis and implement a dumb strategy with whatever juice you have left. Probably wouldn’t take much more than a simple counter being incremented in a bottom-level loop somewhere, as long as you’re looking at your opponent’s source through a level of indirection rather than just jumping straight into it. Some analysis strategies might also converge on a solution rather than finding an exact one, although this would be easier and more reliable with behavioral than with source analysis.
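Roughly this shape, say, with the analyser stubbed out; the generator, the move encoding, and the budget numbers are illustrative, not any real tournament’s API:

```python
ANALYSIS_BUDGET = 9_500   # of a nominal 10,000-unit allowance per move

def analyse(opponent_src):
    # Hypothetical analyser: yields None for each bounded unit of work,
    # then yields its final move once it converges. This stub never
    # finishes, standing in for analysis that has gone down a rabbit hole.
    while True:
        yield None

def decide(opponent_src):
    spent = 0
    for step in analyse(opponent_src):   # one iteration = one processing unit
        spent += 1                       # the simple bottom-level counter
        if step is not None:
            return step                  # analysis converged on a move
        if spent >= ANALYSIS_BUDGET:
            return "C"  # out of budget: drop analysis, play a dumb fallback
    return "C"  # defensive default if the analyser stops without deciding
```

The point of driving the analyser through a generator is the indirection: the budget check lives in one bottom-level loop rather than being scattered through the analysis code.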
However, trapdoors like this are exploitable: they give agents an incentive to nearly but not quite run out the clock before making a decision, in hopes that their opponents will give up and fall back on the simple and stupid. The game theory surrounding this seems to have PD-like characteristics in its own right, so it may just have pushed the relevant game-theoretical decisions up to the metagame.
There is actually a human algorithm in the cooperate/defect space: distrust anyone who seems to spend a long time making a cooperate decision. Anyone who has a really quick, simple way of deciding to cooperate is OK. Anyone who takes a long time is thinking about whether you will defect, or whether they should defect, and you should probably find someone else to trust. This is one of the good things about tit for tat: it passes this test.
That is a problem, but fortunately we know that those competitors will be eliminated before the chameleon, whereupon it will begin to improve its performance.
This doesn’t solve the problem that slightly different CliqueBots won’t cooperate with each other.
If the DuncanBot detects source code different from its own, it runs that source code. So a DuncanBot looks to any CliqueBot like a member of its clique.
A piece of code that runs a piece of source code does not thereby become that piece of source code.
That may well be right: a CliqueBot may well decide I’m not in its clique, and defect. And I, running its source code, will decide that my counterparty IS in the clique, and cooperate. Not a good outcome.
Clearly I need to modify my algorithm so that I run the other bot’s code without bothering to analyse it, and have it play not against its actual opponent, but against me, the bot running it.
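Under the same invented convention as the earlier sketch, that is a one-argument change: the simulated bot is handed my source as its counterparty.

```python
def move(my_src, opponent_src):
    if opponent_src == my_src:
        return "C"  # exact copy of me: cooperate
    namespace = {}
    exec(opponent_src, namespace)  # crude stand-in for a sandboxed run
    # The simulated opponent now plays against me rather than against
    # itself: a CliqueBot run this way sees my non-clique source and
    # defects, and I copy that defection instead of being suckered.
    return namespace["move"](opponent_src, my_src)
```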
I meant, what happens when a DuncanBot meets a DuncanBot that does the same thing but has trivial cosmetic differences in its source code?