Let me try to clear that up.
Define the “Reasonable” property reflexively: a program is “Reasonable” if it provably cooperates with any program it can prove is Reasonable.
It is in the interest of every program to be Reasonable*. In fact, it is in the interest of every program both to be Reasonable and to exhibit a proof of its own Reasonableness. (You might even define that into the Reasonable property: don’t just require provable (conditional) cooperation, but require the exhibition of a proof of conditional cooperation.)
Potentially you might also expand the definition to require efficient proofs: say, proofs verifiable in at most a thousand instructions.
On the other hand, if you’re playing with aliens, there’s not necessarily going to be a way you can clearly establish which part of your source is the proof of your Reasonableness. So you want it to be as easy as possible for someone else to prove that you are Reasonable.
I’ll happily expand / reword this if it’s at all unclear.
*Oh, this may be untrue. If you are really good at getting other programs to cooperate and then defecting, you are better served by not being Reasonable.
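Since full PA proof search is not directly implementable, here is a minimal toy sketch of the "exhibit a proof of your own Reasonableness" idea from the definition above, with a shared certificate string standing in for an exhibited, machine-checkable proof (all names here are invented for illustration):

```python
# Toy sketch only: real Reasonableness requires proofs in PA, which cannot
# be searched for directly, so a shared certificate string stands in for
# an exhibited, machine-checkable proof. All names are invented.

CERT = "I cooperate with any program exhibiting this exact certificate."

def make_reasonable():
    def play(opponent):
        # "Proving" the opponent Reasonable reduces to checking its exhibited
        # certificate; anything unverifiable is treated as not Reasonable.
        return "C" if opponent.get("certificate") == CERT else "D"
    return {"certificate": CERT, "play": play}

a, b = make_reasonable(), make_reasonable()
defector = {"certificate": None, "play": lambda opponent: "D"}

print(a["play"](b))         # C: both exhibit verifiable certificates
print(a["play"](defector))  # D: no certificate, no cooperation
```

The point of exhibiting the certificate up front is exactly the one made above: it makes it as easy as possible for the other player to verify you.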
I’m not sure your definition defines a unique “reasonable” subset of programs. There are many different cliques of mutually cooperating programs. For example, you could say the “reasonable” subset consists of one program that cooperates only with exact copies of itself, and that would be consistent with your definition, unless I’m missing something.
Point. Not sure how to fix that.
Maybe define the Reasonable’ set of programs to be the maximal Reasonable set? That is, say a set is Reasonable if it has the property described above, then take the maximal such set to be the Reasonable’ set. (I’m pretty sure such a set is guaranteed to exist by Zorn’s Lemma, but it’s been a while...)
Zorn’s lemma doesn’t give you uniqueness either. Also, maximal under which partial order? If you mean maximal under inclusion, then my one-element set seems to be already maximal :-)
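The one-element clique mentioned above can be made concrete. A hypothetical sketch, with source-text comparison standing in for proof:

```python
# Hypothetical illustration: a "clique" program cooperates only with exact
# copies of itself. Source-text comparison stands in for proof.

CLIQUE_SRC = "cooperate iff opponent source equals my source"

def clique_play(my_src, opponent_src):
    return "C" if opponent_src == my_src else "D"

# Exact copies mutually cooperate, so the singleton set satisfies the property...
print(clique_play(CLIQUE_SRC, CLIQUE_SRC))  # C
# ...yet it defects against every other program, including members of other
# mutually cooperating cliques, so the cliques cannot be merged into one set.
print(clique_play(CLIQUE_SRC, "some other cooperative program"))  # D
```

Any attempt to enlarge this set under inclusion breaks mutual cooperation, which is why inclusion-maximality does not single out a unique Reasonable’ set.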
Clarify what you mean by the various conjugations of proof: Do you mean “There exists a valid proof in PA with a Gödel number less than N”?
Just “there exists a valid proof in PA”. If you’re playing with bounded time, then you want efficient proofs; in that case the definition would be as you have it.
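A toy sketch of the bounded variant (the checker and the bound below are invented stand-ins for a PA proof verifier and a Gödel-number cap): enumerate candidate proofs in length order and give up past the bound, so the search always halts:

```python
from itertools import product

# Toy stand-in for "a valid proof in PA with Godel number less than N":
# enumerate candidate proofs as strings in length order, stopping at a bound
# so the search always halts. The checker below is invented for illustration.

def bounded_proof_search(checker, alphabet, max_len):
    for n in range(1, max_len + 1):
        for letters in product(alphabet, repeat=n):
            candidate = "".join(letters)
            if checker(candidate):
                return candidate   # a proof found within the bound
    return None                    # bound exceeded: treat as "not provable"

print(bounded_proof_search(lambda s: s == "ok", "ko", 2))    # ok
print(bounded_proof_search(lambda s: s == "okay", "ko", 2))  # None
```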
At that point you can’t implement it in a halting Turing machine. I’m not sure what PA can prove about the behavior of something that isn’t a halting Turing machine regarding a particular input.
By “implement it”, you mean, one can’t verify something is Reasonable on a halting TM? Not in general, of course. You can for certain machines, though, particularly if they come with their own proofs.
Note that the definition is that Reasonable programs cooperate with those they can prove are Reasonable, not programs which are Reasonable.
So a program that can present a flawed proof, one not necessarily recognizable as flawed by machines of comparable scale, can dominate those other machines?
Also, if we want this contest to be a model of strategies in the real world, shouldn’t there be a fractional point-cost for program size, to model the fact that computation is expensive?
I.e., simpler programs should win over more complex programs, all else being equal.
Perhaps the most accurate model would include a small payoff penalty per codon included in your program, and a larger payoff penalty per codon actually executed.
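The proposed scoring might look like this (the penalty constants are invented for illustration):

```python
# Sketch of the proposed size/execution scoring; penalty constants invented.
SIZE_PENALTY = 0.01  # per codon included in the program
EXEC_PENALTY = 0.10  # per codon actually executed (penalized more heavily)

def adjusted_payoff(raw_payoff, program_size, codons_executed):
    return (raw_payoff
            - SIZE_PENALTY * program_size
            - EXEC_PENALTY * codons_executed)

# All else being equal, the simpler program wins:
print(adjusted_payoff(3.0, 100, 10))  # 3.0 - 1.0 - 1.0 = 1.0
print(adjusted_payoff(3.0, 500, 10))  # 3.0 - 5.0 - 1.0 = -3.0
```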
EDIT: What’s wrong with this post?
(I didn’t downvote you.)
It’s quite straightforward to write an algorithm which accepts only valid proofs (but might also reject some proofs which are valid, though in first-order logic you can do away with this caveat). Flawed proofs are not an issue—if A presents a proof which B is unable to verify, B ignores it.
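A minimal illustration of such a verifier, with a toy Hilbert-style system (axioms plus modus ponens) standing in for PA: any step the checker cannot verify causes the whole proof to be rejected, matching the "B ignores it" behavior:

```python
# Minimal proof checker; a toy Hilbert-style system (axioms plus modus
# ponens) stands in for PA. Formulas are atoms (strings) or implications
# represented as ("->", premise, conclusion). Invented for illustration.

def check_proof(axioms, proof, goal):
    derived = []
    for line in proof:
        if line in axioms:
            derived.append(line)
            continue
        # modus ponens: some earlier line is (p -> line) with p also derived
        if any(imp == ("->", p, line) for imp in derived for p in derived):
            derived.append(line)
            continue
        return False  # a step the checker cannot verify: reject the proof
    return goal in derived

AX = {"A", ("->", "A", "B")}
print(check_proof(AX, ["A", ("->", "A", "B"), "B"], "B"))  # True
print(check_proof(AX, ["B"], "B"))                         # False (flawed proof)
```

Note this checker is sound but trivially conservative: it only accepts steps it can verify, which is all that B needs.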
There’s someone who consistently downvotes everything I ever write whenever he comes onto the site; I’m not sure what to do about that.
A proves that A is inconsistent, then proves that A cooperates with every program that A proves is Reasonable, and that B is Reasonable.
B accepts A’s proof that A is inconsistent, and the rest follows trivially.
I’m not sure I understand. A is a TM—which aspect is it proving inconsistent?
A proves that the logic A uses to prove that B is Reasonable is inconsistent. It is sufficient to say “If I can prove that B is Reasonable, B is Reasonable”.
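The move in that last sentence is essentially an appeal to Löb's theorem: rather than trying to prove its own consistency (which a PA-like system cannot do), a program can adopt "if I can prove B is Reasonable, then B is Reasonable" and let Löb's theorem discharge the self-reference. Schematically, with the box read as "provable":

```latex
% Loeb's theorem, with \Box read as "provable in PA": for any sentence P,
\text{if } \vdash \Box P \rightarrow P \text{, then } \vdash P
% equivalently, as a single internalized schema:
\Box\,(\Box P \rightarrow P) \rightarrow \Box P
```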