It could be more extreme than this result implies; I'm not sure how realistic the modeling costs are.
When A is looking for a proof about B(A), and B is looking for a proof about A(B), it seems they should either
1) have the same cost, proportional to the sum of their complexities as measured by length (since A+B = B+A), or
2) have the cost of each act of modeling the other program depend on that program's length. Proving that CooperateBot cooperates with you should be easier, if only because CooperateBot is a shorter program: it isn't trying to prove anything about you, so the fact that it has your source code doesn't matter (if it did, that would effectively increase its length). I think FairBot cooperating with itself requires a bit more of a search than CooperateBot doing so, and should carry a correspondingly larger cost.
FairBot shouldn't just be penalized epsilon for modeling CooperateBot; it should be penalized based on the length of CooperateBot.
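A toy sketch of option 2 (my own construction; the per-character constant and the source strings below are made up for illustration): each act of modeling charges the modeler in proportion to the opponent's source length.

```python
# Hypothetical cost model, not the post's actual scheme: each act of
# modeling an opponent costs in proportion to the opponent's source
# length, since that is what the proof search has to digest.

PER_CHAR = 0.01  # made-up scaling constant

def modeling_cost(opponent_source: str) -> float:
    """Penalty paid for one attempt to prove something about an opponent."""
    return PER_CHAR * len(opponent_source)

# Stand-in source strings, just to compare lengths:
CB_SRC = "def cb(x): return True"
FB_SRC = "def fb(x): return provable('x(fb)')"

print(modeling_cost(CB_SRC))  # small: CooperateBot is short
print(modeling_cost(FB_SRC))  # larger: FairBot carries a proof search
```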
CooperateBot is defined by CB(X)↔⊤.
DefectBot is defined by DB(X)↔⊥.
FairBot is defined by FB(X)↔□(X(FB)).
PrudentBot is defined by PB(X)↔□(X(PB)∧(X(DB)→□⊥)).
Their definitions here also reflect that the modeling bots are longer, especially PrudentBot.
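For anyone who wants to check the matchups, here is a minimal evaluator (my own sketch, not from the post) using the standard level semantics for GL: □φ holds at level n iff φ holds at every level m < n, and since every self-reference in these definitions sits under a □, the values can be computed bottom-up from level 0.

```python
from functools import lru_cache

# box(phi) holds at level n iff phi holds at every level m < n.
# All self-references sit under a box, so evaluation is well-founded.

def formula(agent, opponent):
    """Defining formula for 'agent cooperates with opponent'."""
    if agent == "CB":  # CB(X) <-> True
        return ("const", True)
    if agent == "DB":  # DB(X) <-> False
        return ("const", False)
    if agent == "FB":  # FB(X) <-> box(X(FB))
        return ("box", ("stmt", opponent, "FB"))
    if agent == "PB":  # PB(X) <-> box(X(PB) and (X(DB) -> box(False)))
        return ("box", ("and", ("stmt", opponent, "PB"),
                        ("imp", ("stmt", opponent, "DB"),
                                ("box", ("const", False)))))
    raise ValueError(agent)

@lru_cache(maxsize=None)
def stmt(agent, opponent, level):
    return ev(formula(agent, opponent), level)

def ev(phi, level):
    tag = phi[0]
    if tag == "const":
        return phi[1]
    if tag == "stmt":
        return stmt(phi[1], phi[2], level)
    if tag == "and":
        return ev(phi[1], level) and ev(phi[2], level)
    if tag == "imp":
        return (not ev(phi[1], level)) or ev(phi[2], level)
    if tag == "box":
        return all(ev(phi[1], m) for m in range(level))
    raise ValueError(tag)

def play(agent, opponent, depth=16):
    # Values stabilize once the level passes the formula's modal
    # depth; 16 is generous for these four bots.
    return "C" if stmt(agent, opponent, depth) else "D"

for a in ["CB", "DB", "FB", "PB"]:
    print(a, [play(a, b) for b in ["CB", "DB", "FB", "PB"]])
```

Running it reproduces the familiar table: FairBot cooperates with itself (the level-0 box is vacuously true and cooperation propagates upward, Löb in miniature), and PrudentBot exploits CooperateBot while cooperating with FairBot and itself. PrudentBot's formula tree is also visibly the largest, which fits the length point above.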
TL;DR
The costs could just be (a function of) the length of the proof*, or the amount of time it took to find and check.
*Still reasonable with caching, although costs would be much smaller.
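A hypothetical way to implement that footnote (goal strings and constants here are mine, and this is a toy stand-in for a real prover, not one): charge one unit per node the proof search visits, and let a cache make an already-proved lemma nearly free.

```python
# Hypothetical pricing scheme: one unit per fresh node the proof
# search visits; the cache makes an already-proved lemma nearly free.

NODE_COST = 1.0
CACHE_HIT_COST = 0.001

_proved = {}

def prove(goal, subgoals=()):
    """Return (proved?, cost). `goal` and `subgoals` stand in for a
    real derivation tree; cost counts the fresh nodes visited."""
    if goal in _proved:
        return _proved[goal], CACHE_HIT_COST
    ok, cost = True, NODE_COST
    for sub_goal, sub_subgoals in subgoals:
        sub_ok, sub_cost = prove(sub_goal, sub_subgoals)
        ok, cost = ok and sub_ok, cost + sub_cost
    _proved[goal] = ok
    return ok, cost

# Toy derivation: FB(CB) via the lemma CB(FB).
print(prove("FB(CB)", (("CB(FB)", ()),)))  # (True, 2.0)  first time
print(prove("FB(CB)"))                     # (True, 0.001) cached
```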
I always like Game Theory that has simple advice applicable to real life: If you want people to cooperate with you, Be Comprehensible!