At that point you can’t implement it in a halting Turing machine. I’m not sure what PA can prove about the behavior, on a particular input, of something that isn’t a halting Turing machine.
By “implement it”, you mean, one can’t verify something is Reasonable on a halting TM? Not in general, of course. You can for certain machines, though, particularly if they come with their own proofs.
Note that the definition is that Reasonable programs cooperate with those they can prove are Reasonable, not with all programs which are Reasonable.
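To make the direction of that definition concrete, here’s a minimal Python sketch (not from the original discussion): `prove`, the `Reasonable(...)` statement encoding, and the step budget are all invented for illustration, and the prover is a deliberately conservative stub rather than a real bounded proof search.

```python
COOPERATE, DEFECT = "C", "D"

def prove(statement: str, budget: int) -> bool:
    # Placeholder for a bounded proof search in some fixed formal
    # system (e.g. PA). Always answering False is sound (it never
    # accepts a false statement) but maximally incomplete.
    return False

def reasonable_player(opponent_source: str, budget: int = 10**6) -> str:
    # The definition cuts one way: cooperate with opponents we can
    # PROVE Reasonable, not with all opponents that ARE Reasonable.
    # An opponent whose Reasonableness we cannot prove within the
    # budget gets defection, even if it is in fact Reasonable.
    if prove("Reasonable(%r)" % opponent_source, budget):
        return COOPERATE
    return DEFECT
```

With the stub prover this player always defects, which is still (vacuously) Reasonable under the definition; all the substance lives in how strong `prove` is.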
So then a program which can present a flawed proof, one not necessarily recognizable as flawed by machines of equivalent scale, can dominate those other machines?
Also, if we want this contest to be a model of strategies in the real world, shouldn’t there be a fractional point-cost for program size, to model the fact that computation is expensive?
I.e., simpler programs should win over more complex programs, all else being equal.
Perhaps the most accurate model would include a small payoff penalty per codon included in your program, and a larger payoff penalty per codon actually executed.
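For concreteness, such a scoring rule might look like the sketch below; the penalty constants and the function name are invented for illustration, not proposed values.

```python
SIZE_PENALTY = 0.001  # cost per codon present in the program's source
EXEC_PENALTY = 0.01   # larger cost per codon actually executed

def adjusted_payoff(raw_payoff: float, codons_in_source: int,
                    codons_executed: int) -> float:
    # Deduct a small rent for program size and a larger rent for
    # computation performed, so simpler and lazier programs win ties.
    return (raw_payoff
            - SIZE_PENALTY * codons_in_source
            - EXEC_PENALTY * codons_executed)

# All else equal, the shorter, cheaper-to-run program scores higher:
assert adjusted_payoff(3.0, 100, 50) > adjusted_payoff(3.0, 200, 500)
```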
EDIT: What’s wrong with this post?
(I didn’t downvote you.)
It’s quite straightforward to write an algorithm which accepts only valid proofs (though it might also reject some proofs which are valid; in first-order logic you can do away with this caveat, since checking whether a given string is a valid proof is decidable). Flawed proofs are not an issue: if A presents a proof which B is unable to verify, B ignores it.
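As a toy illustration of why checking is the easy direction, here is a verifier for a made-up proof format (the encoding is an assumption of this sketch, not anything from the contest): a proof is a list of formulas, each of which must be an axiom or follow from two earlier lines by modus ponens, with implications written as `("->", p, q)` tuples.

```python
def check_proof(proof, axioms, goal):
    # Accept only valid proofs of `goal`. A proof is a list of
    # formulas; each line must be an axiom or follow by modus ponens
    # from two earlier lines. Any unverifiable step rejects the
    # whole proof -- a flawed proof is simply ignored.
    verified = []
    for line in proof:
        ok = line in axioms or any(
            ("->", p, line) in verified  # from p and p -> line, get line
            for p in verified
        )
        if not ok:
            return False
        verified.append(line)
    return goal in verified

axioms = {"p", ("->", "p", "q")}
assert check_proof(["p", ("->", "p", "q"), "q"], axioms, "q")
assert not check_proof(["q"], axioms, "q")  # flawed "proof": rejected
```

The checker can be sound without being complete: it never accepts an invalid proof, and anything it cannot verify it treats as no proof at all.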
There’s someone who consistently downvotes everything I ever write whenever he comes onto the site; I’m not sure what to do about that.
A proves that A is inconsistent, then proves that A cooperates with every program that A proves is Reasonable, and that B is Reasonable.
B accepts A’s proof that A is inconsistent, and the rest follows trivially (an inconsistent system proves everything, so every remaining claim comes for free).
I’m not sure I understand. A is a TM—which aspect is it proving inconsistent?
A proves that the logic A uses to prove that B is Reasonable is inconsistent. It is sufficient to say “If I can prove that B is Reasonable, then B is Reasonable”: by Löb’s theorem, any system that proves that statement about itself thereby proves that B is Reasonable.
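For reference, the step being invoked here is Löb’s theorem; writing Prov_T for the provability predicate of the theory T that A reasons in:

```latex
% Löb's theorem: if T (containing enough arithmetic) proves that
% provability of P implies P, then T proves P outright.
\[
  T \vdash \bigl(\mathrm{Prov}_T(\ulcorner P \urcorner) \to P\bigr)
  \quad\Longrightarrow\quad T \vdash P
\]
```

Instantiating P as “B is Reasonable” turns the assertion “if I can prove that B is Reasonable, then B is Reasonable” into an outright proof that B is Reasonable, with no appeal to consistency needed.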