I know this post is long, long dead but:

if they have common knowledge of each other’s source code.

Isn’t this a logical impossibility? To have knowledge is to contain it in your source code, so A is contained in B, and B is contained in A...
Alternatively, I’m considering all the strategies I could use, based on looking at my opponent’s strategy, and one of them is “Cooperate only if the opponent, when playing against himself, would defect.”
“Common knowledge of each other’s rationality” doesn’t seem to help. Knowing I use TDT doesn’t give someone the ability to make the same computation I do, and so engage TDT. They have to actually look into my brain, which means they need a bigger brain, which means I can’t look into their brain. If I meet one of your perfectly rational agents who cooperates in the true Prisoner’s Dilemma, I’m going to defect. And win. Rationalists should win.
Knowing I use TDT doesn’t give someone the ability to make the same computation I do, and so engage TDT.
It is possible to predict the output of a system without emulating the system. We can use the idea of ‘emulating their behavior’ as an intuition pump if it helps, but to assume that emulation is required is a mistake.
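For instance (a minimal sketch of my own, not from the thread; the hard-coded source strings are purely illustrative): some programs wear their output on their sleeve, and a predictor can read it off the text without ever running them.

    def predict(opponent_source):
        # Predict an agent's move by inspecting its source, not running it.
        # A program that ignores its input and returns a constant is an
        # open book: no emulation needed.
        src = opponent_source.strip()
        if src == "def move(opp_source): return 'D'":
            return 'D'
        if src == "def move(opp_source): return 'C'":
            return 'C'
        # Anything subtler needs deeper analysis (or a shrug); the point
        # is only that emulation is not always required.
        return 'unknown'

    print(predict("def move(opp_source): return 'D'"))  # -> D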
If I meet one of your perfectly rational agents who cooperates in the true Prisoner’s Dilemma, I’m going to defect. And win.
Why on earth would I cooperate with you? You just told me you were going to defect!
(But I do respect your grappling with the problem. It is NOT trivial. Well, I should say it is trivial, but it is hard to get your head around, particularly with our existing intuitions.)
A = “Preceded by its own quotation with A’s and B’s swapped is B’s source code” preceded by its own quotation with A’s and B’s swapped is B’s source code.
B = “Preceded by its own quotation with B’s and A’s swapped is A’s source code” preceded by its own quotation with B’s and A’s swapped is A’s source code.
A and B each now contain the other’s source code.
Edit: I used “followed” when it should have been “preceded”.
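(The same trick compiles to running code. A sketch in Python, mine rather than DanielLC’s: the three code lines of Program A print Program B, which is those same lines with the roles of ‘A’ and ‘B’ swapped, and running B prints A back.)

    # Program A. Its three code lines print Program B: the same three
    # lines with 'A' and 'B' swapped. Running B prints A back again.
    # (These comment lines are not part of the quine.)
    me, other = 'A', 'B'
    t = 'me, other = %r, %r\nt = %r\nprint(t %% (other, me, t))'
    print(t % (other, me, t))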
Isn’t this a logical impossibility? To have knowledge is to contain it in your source code, so A is contained in B, and B is contained in A...
No. If you know all the relevant data yourself, you don’t have to know it again just because B knows it. That is just a naive, inefficient way to implement the ‘source code’. Call the code ‘DRY’, for example. Or consider it an instruction to do a ‘shallow copy’ and a ‘memory free’ after getting a positive result from a ‘deep compare’.
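Concretely (a sketch with hypothetical filenames; only the deduplication matters): once a ‘deep compare’ confirms the two blobs are identical, a single shared reference carries all the ‘knowledge’ the second copy was adding.

    with open('agent_a.py') as f:       # hypothetical files holding
        a_src = f.read()                # each agent's source
    with open('agent_b.py') as f:
        b_src = f.read()

    if a_src == b_src:                  # 'deep compare'
        b_src = a_src                   # 'shallow copy': both names now
                                        # share one object; the duplicate
                                        # string is freed by the GC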
Isn’t this a logical impossibility? To have knowledge is to contain it in your source code, so A is contained in B, and B is contained in A...
The idea is that A and B are passed each other’s source code as input (and know their own source code thanks to the theorem that guarantees Turing machines can, WLOG, access their own source code, i.e. Kleene’s recursion theorem, which I think DanielLC’s comment proves). There’s no reason you can’t do this, although you won’t be able to deduce whether your opponent halts and so forth.
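In code, the setup might look like this (a sketch; the function name is made up). Note that the agent below always halts, because textual equality of two strings is decidable even though predicting an arbitrary opponent’s behavior is not:

    def clone_cooperator(own_source, opp_source):
        # Cooperate exactly when the opponent is a character-for-character
        # copy of me; a copy provably makes the same choice I do.
        return 'C' if opp_source == own_source else 'D'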
Alternatively, I’m considering all the strategies I could use, based on looking at my opponent’s strategy, and one of them is “Cooperate only if the opponent, when playing against himself, would defect.”
Your opponent might not halt when given himself as input.
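So any strategy that actually runs the opponent needs a guard. Here is a sketch of the earlier “cooperate only if the opponent, when playing against himself, would defect” strategy with a time limit bolted on; it assumes agents are picklable Python functions, and compile_agent plus the one-second budget are my own inventions:

    import multiprocessing
    import queue

    def _call(agent, arg, q):
        q.put(agent(arg))

    def run_with_timeout(agent, arg, seconds=1.0):
        # Run agent(arg) in a subprocess; None means it didn't halt in time.
        q = multiprocessing.Queue()
        p = multiprocessing.Process(target=_call, args=(agent, arg, q))
        p.start()
        p.join(seconds)
        if p.is_alive():
            p.terminate()            # still running: treat as non-halting
            return None
        try:
            return q.get(timeout=1)  # it halted; collect its move
        except queue.Empty:
            return None              # it crashed instead of answering

    def mirror_breaker(opp_source):
        opponent = compile_agent(opp_source)   # hypothetical loader
        self_play = run_with_timeout(opponent, opp_source)
        if self_play is None:                  # never halted against himself
            return 'D'
        return 'C' if self_play == 'D' else 'D'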
If I meet one of your perfectly rational agents who cooperates in the true Prisoner’s Dilemma, I’m going to defect. And win. Rationalists should win.
The problem with your plan is that TDT agents don’t always cooperate. I will only cooperate if I have reason to believe that you and I are similar enough that we will decide to do the same thing for the same reasons. I hate to burst your bubble, but you are not the first person in all of recorded history to think of this. Other people are allowed to be smart too. If you come up with a clever reason to defect when playing against me, it is very possible (perhaps even likely, although I don’t know you all that well) that I will think of it too.