By mutation I mean set!, or anything else that can change the value of a variable; things ending in ‘!’. namespace-set-variable-value! does technically mutate things, but looking at the way people actually used it, I don’t think it would have caused a problem?
Ah, mutation in the “mutable state” sense. When I was doing some light experimenting with static analysis in the early days of the contest, I looked for variables storing anything involving eval (or involving another variable storing anything involving eval, etc.), and just treated (sounds like that should be “trought” or something) set! calls as define calls—another chance for a variable to become contaminated with evalness. Could you give an example of a case where set! makes things harder to analyze?
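To be concrete about the kind of thing my rule did handle, here’s a sketch (made-up variable names, not code from any actual entry, and assuming opponent is bound to the opponent’s source as in the contest):

(define x (eval opponent))   ; x is contaminated: its definition mentions eval
(define y (list x 'C))       ; y is contaminated: it mentions the contaminated x
(define z 'C)                ; z starts out clean
(set! z y)                   ; treated just like (define z y): z becomes contaminated too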
OK. Let me explain what the basic idea of my program was. The idea was to analyze the opponent’s program in an attempt to answer the following question: What is the probability that I can affect the outcome of their program? Now that I’ve thought about this some more, this makes less sense than I initially thought; if there’s a second round (and I have time to program something as complicated as this), I’m going to try the rather more sensible “How much can I affect the probability distribution of their program’s output?” But for now let’s go with the original idea, even though it doesn’t make sense in general.
So the idea was to compute this probability, p, and then cooperate if p>=1/2. (The number 1/2 is not arbitrary and is based on the particular values for T, R, P, and S used. Actually, if the numbers were different, there wouldn’t be a single p; it would depend on other things, but I’m not going to go into that, because this strategy doesn’t actually make sense.)
So e.g. the strategy should defect against CooperateBot, DefectBot, and any sort of RandomBot; it would cooperate with (and get defected against by) Watson’s entry; it would cooperate with (and be cooperated with by) what So8res called a “rank N MimicBot”, so long as N>=2; and it would cooperate with what solipsist called a MimicBot so long as the “cooperate anyway” probability was at most 1/2, otherwise it would defect.
OK. So let’s say I’m running my program and it comes up against one of these MimicBots. It gets to a part that looks like
(if (= 0 (random 100)) 'C ((eval opponent) self))
What does it do? Well, it has to check both branches and see that in the first, which occurs with probability 1/100, it can’t affect the outcome, and in the second, which occurs with probability 99/100, it can. (Really it’s a bit more complicated than that—how does it get the probabilities? But A. the strategy doesn’t actually quite make sense anyway and B. I’m trying to keep this simple.)
Thing is, in order to check both branches, it basically has to (partially) run them. To be honest, “static analysis” is maybe not the best description of what my program was supposed to do—it wasn’t so much “statically analyze the given program” as “statically transform it into a different program, and then run that”. It has to partially run them so that it’s not fooled by conditionals like (if (= 0 1) 'C 'D); it shouldn’t take branches that are simply never going to happen.
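To sketch what I mean by “transform” (an idealization with made-up names, not my actual code): the transformed program computes, for each expression, a list of (probability . outcome) pairs, where an outcome is C, D, or “depends on what I do”. A conditional on randomness then turns into something that runs both analyzed branches and weights them:

(define (weight p dist)
  (map (lambda (pr) (cons (* p (car pr)) (cdr pr))) dist))

(define (analyzed-if p-true then-dist else-dist)
  (append (weight p-true then-dist)
          (weight (- 1 p-true) else-dist)))

; the MimicBot fragment above would notionally become:
(analyzed-if 1/100
             '((1 . C))               ; the 'C branch: I can't affect it
             '((1 . depends-on-me)))  ; the eval branch: I can
; => '((1/100 . C) (99/100 . depends-on-me))

(A constant test like (= 0 1) would instead prune to just the live branch, rather than going through analyzed-if.)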
And that leads to the problem—suppose instead it were to come to
(if (= 0 (random 100)) (set! result 'C) (set! result ((eval opponent) self)))
Now running both branches is no longer going to yield a sensible result. I’m not sure what it would yield, but it wouldn’t yield anything helpful. It gets worse if instead of being the result, the variable set is some sort of intermediate...

I still don’t see where set! is relevantly different from define… e.g.
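(define result (if (= 0 (random 100)) 'C ((eval opponent) self)))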
In this case (notionally), the (transformed) conditional would end up returning a data structure representing “probability 1/100 of always C, probability 99/100 of C or D depending on what I do”, and then result would be set to that data structure. The (transformed) define itself doesn’t return a value, but that’s OK, it’s not supposed to, it’s just supposed to influence the value of something else.
In the above case, result would first be set to a data structure representing “probability 1 of C” and then to a data structure representing “probability 1 of C or D depending on what I do”. Then the (transformed) conditional itself would return something undefined, since the return value of set! is undefined. (And OK, it wasn’t supposed to return a value either, but its influence over other parts of the program has been screwed up.)
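Spelled out with the same notional representation (again a sketch, not real code), running both transformed branches does this:

(define result #f)                     ; result exists somewhere in the real program
(set! result '((1 . C)))               ; from the 1/100 branch
(set! result '((1 . depends-on-me)))   ; from the 99/100 branch, clobbering the first
; result now claims "depends on me, probability 1"; the 1/100 of guaranteed C
; is lost, and neither set! returned anything useful to the enclosing if.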
Basically, my (start of a) program was making the assumption that the return value of a thing was what was important—or at least that, while side effects might exist, they wouldn’t screw things up elsewhere. (With a few exceptions for special side-effecty functions which I intended to handle specially.) But allowing mutation would mean having to write my program to keep track of all of the simulated variables, whereas without it I can just focus on return values and let the Scheme interpreter handle that.