This is important, because otherwise the contest devolves into who can submit the most copies of AbsolutismBot (cooperate with programs that share its source code, otherwise defect).
I think that any submitted program can be improved by combining it with AbsolutismBot. If you’re playing with someone who submitted the same program as you, cooperate (they can’t defect against you in this scenario, since they’re running identical source code). If they aren’t running the same code, run whatever program lies underneath it.
I think this could get generalized to cooperation with everyone who has the AbsolutismBot “wrapper”, since it doesn’t matter what the code after the AbsolutismBot section does. In English (since I don’t know how to program in Scheme), the program would be like this:
If the first 117 characters of the other program are the same as the first 117 characters of this program, cooperate. Otherwise, do some other strategy.
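Roughly, in Python (just to show the shape of the strategy; the tournament itself is in Scheme), assuming the engine hands each player its own source and the opponent’s source as strings, and with underlying_strategy standing in for whatever program lies underneath:

    WRAPPER_LENGTH = 117  # length of the shared wrapper prefix

    def underlying_strategy(opponent_source):
        # placeholder for whatever the wrapped program would otherwise do
        return "D"

    def play(my_source, opponent_source):
        # cooperate with anyone carrying the same wrapper prefix
        if opponent_source[:WRAPPER_LENGTH] == my_source[:WRAPPER_LENGTH]:
            return "C"
        # otherwise fall through to the underlying program
        return underlying_strategy(opponent_source)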
All players that implement this strategy will cooperate with each other. Granted, this doesn’t help them win the tournament since it isn’t a relative advantage compared to other AbsolutismBots—it just makes everyone who doesn’t do this lose the tournament.
Except that over some threshold, any Anti-Absolutism bots (which have some way of “escaping” while still containing the same first 117 characters, like having a C preprocessor directive that redefines TRUE to equal FALSE) would necessarily be superior.
Previously on LW.
Interesting. Really, what you want (in slightly more generality) is to cooperate with anyone who can prove they will cooperate if you yourself can prove you will cooperate under the same condition.
That is, if from their source, you can prove “they cooperate if they can prove this condition about me”, then you cooperate.
Of course, it’s not generally possible to prove things about a program given its source, especially at this level of self-reference. In this particular case the “generally” in there is important. It is in your interest for other programs to be able to prove this property about you. This is beyond the scope of the tournament, obviously, but given extremely sophisticated players, every player benefits from adding this property (if possible). You might even want to include a subprogram which produces a proof!
Let me try to clear that up.
Define the “Reasonable” property reflexively: a program is “Reasonable” if it provably cooperates with any program it can prove is Reasonable.
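One rough way to write this symbolically, leaving open exactly which proof system each “prove” refers to (Prov is provability, Coop(P, Q) means P cooperates when playing Q):

    R(P) \iff \mathrm{Prov}\big(\, \forall Q\ [\, \mathrm{Prov}(R(Q)) \rightarrow \mathrm{Coop}(P, Q) \,] \,\big)

Note that R appears on both sides, so this is a condition on the predicate rather than an explicit definition.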
It is in the interest of every program to be Reasonable*. In fact, it is in the interest of every program both to be Reasonable and to exhibit a proof of its own Reasonableness. (You might even define that into the Reasonable property: don’t just require provable (conditional) cooperation, but require the exhibition of a proof of conditional cooperation.)
Potentially you might also expand the definition to require efficient proofs—say, at most a thousand instructions.
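As a toy sketch of that bounded version (proves_reasonable is hypothetical here: a proof search in PA capped at some step limit, which I’m not going to try to implement):

    PROOF_SEARCH_LIMIT = 1000  # the efficiency bound on proof search

    def proves_reasonable(opponent_source, limit):
        # Hypothetical: search for at most `limit` steps for a proof that the
        # opponent cooperates with any program it can prove is Reasonable.
        raise NotImplementedError

    def play(my_source, opponent_source):
        if proves_reasonable(opponent_source, limit=PROOF_SEARCH_LIMIT):
            return "C"  # cooperate with provably Reasonable opponents
        return "D"      # otherwise fall back to some other strategy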
On the other hand, if you’re playing with aliens, there’s not necessarily going to be a way you can clearly establish which part of your source is the proof of your Reasonableness. So you want it to be as easy as possible for someone else to prove that you are Reasonable.
I’ll happily expand / reword this if it’s at all unclear.
*Oh—this is maybe untrue. If you are really good at getting other programs to cooperate and then defecting, you are better served by not being Reasonable.
I’m not sure your definition defines a unique “reasonable” subset of programs. There are many different cliques of mutually cooperating programs. For example, you could say the “reasonable” subset consists of one program that cooperates only with exact copies of itself, and that would be consistent with your definition, unless I’m missing something.
Point. Not sure how to fix that.
Maybe define the Reasonable’ set of programs to be the maximal Reasonable set? That is, a set is Reasonable if it has the property as described; then take the maximal such set to be the Reasonable’ set (I’m pretty sure this is guaranteed to exist by Zorn’s Lemma, but it’s been a while...).
Zorn’s lemma doesn’t give you uniqueness either. Also, maximal under which partial order? If you mean maximal under inclusion, then my one-element set seems to be already maximal :-)
Clarify what you mean by the various conjugations of proof: Do you mean “There exists a valid proof in PA with a Gödel number less than N”?
Just “there exists a valid proof in PA”. If you’re playing with bounded time, then you want efficient proofs; in that case the definition would be as you have it.
At that point you can’t implement it in a halting Turing machine. I’m not sure what PA can prove about the behavior of something that isn’t a halting Turing machine regarding a particular input.
By “implement it”, you mean, one can’t verify something is Reasonable on a halting TM? Not in general, of course. You can for certain machines, though, particularly if they come with their own proofs.
Note that the definition is that Reasonable programs cooperate with those they can prove are Reasonable, not programs which are Reasonable.
So then a program which can present a flawed proof which is not necessarily recognizable as flawed to machines of equivalent scale, can dominate over those other machines?
Also, if we want this contest to be a model of strategies in the real world, shouldn’t there be a fractional point-cost for program size, to model the fact that computation is expensive?
I.e., simpler programs should win over more complex programs, all else being equal.
Perhaps the most accurate model would include a small payoff penalty per codon included in your program, and a larger payoff penalty per codon actually executed.
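For concreteness, a minimal sketch of such a scoring rule (the penalty constants here are made up):

    SIZE_PENALTY = 0.001  # small cost per codon included in the program
    EXEC_PENALTY = 0.01   # larger cost per codon actually executed

    def penalized_score(raw_payoff, codons_in_source, codons_executed):
        return raw_payoff - SIZE_PENALTY * codons_in_source - EXEC_PENALTY * codons_executed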
EDIT: What’s wrong with this post?
(I didn’t downvote you.)
It’s quite straightforward to write an algorithm which accepts only valid proofs (but might also reject some proofs which are valid, though in first-order logic you can do away with this caveat). Flawed proofs are not an issue—if A presents a proof which B is unable to verify, B ignores it.
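For instance, a toy checker for a propositional system where every line must be a supplied axiom or follow from earlier lines by modus ponens; anything it can’t verify is rejected, so a flawed proof simply gets ignored:

    # Formulas: atoms are strings; ("->", A, B) means "A implies B".

    def by_modus_ponens(formula, earlier):
        # formula follows if some earlier line is (X -> formula) and X is also earlier
        return any(
            isinstance(f, tuple) and f[0] == "->" and f[2] == formula and f[1] in earlier
            for f in earlier
        )

    def accepts(proof, axioms, goal):
        verified = []
        for line in proof:
            if line in axioms or by_modus_ponens(line, verified):
                verified.append(line)
            else:
                return False  # one unverifiable step and the whole proof is rejected
        return bool(verified) and verified[-1] == goal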
There’s someone who consistently downvotes everything I ever write whenever he comes onto the site; I’m not sure what to do about that.
A proves that A is inconsistent, then proves both that A cooperates with every program that A proves is Reasonable and that B is Reasonable.
B accepts A’s proof that A is inconsistent, and the rest follows trivially.
I’m not sure I understand. A is a TM—which aspect is it proving inconsistent?
A proves that the logic A uses to prove that B is Reasonable is inconsistent. It is sufficient to say “If I can prove that B is Reasonable, B is Reasonable”.
Except we’re not allowed to use anyone else’s source code, so the test could just as easily be simplified to “if opponent source contains integer 1415926535, cooperate” (for a randomly chosen 1415926535).
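A minimal sketch (the fallback is arbitrary):

    PASSWORD = "1415926535"  # any token the clique agrees on in advance

    def play(my_source, opponent_source):
        if PASSWORD in opponent_source:
            return "C"  # cooperate with anyone whose source carries the token
        return "D"      # otherwise do whatever you were going to do anyway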
It’s not necessarily best to cooperate with everyone with the “AbsolutismBot” wrapper. This guarantees that you and the other program will both cooperate, but without the wrapper it might have been the case that you would defect and the other program would cooperate, which is better for you.