> even when the agents are unable to explicitly bargain or guarantee their fulfilment of their end by external precommitments
I believe there is a misconception here. The actual game you describe is the game between the programmers, and since each programmer knows in advance that the others' programs will indeed be run with exactly the code their own program has access to, each program submission is a binding commitment to behave in a certain way.
Game theory has long known that if binding commitments are possible, most dilemmas can be solved easily. In other words, I believe this is very nice, but it is quite far from being the “huge success” you claim it is.
Put differently: the whole thing depends crucially on X being certain that Y will run the strategy (i.e., the code) that X thinks it will run. But how on Earth would a real agent ever be able to know such a thing about another agent?
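To make both points concrete, here is a minimal sketch of the program-swap game in Python. All the names (clique_bot, defect_bot, play, the payoff table) are my own illustrations, not the actual tournament interface; I assume the standard one-shot Prisoner's Dilemma payoffs. Once a program is submitted, whatever move it outputs given the opponent's source is executed unconditionally, which is exactly a binding commitment; and a program that conditions on an exact source match cooperates only if it knows precisely which text the other side submitted.

```python
import inspect

# Standard one-shot Prisoner's Dilemma payoffs: (row player, column player).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}


def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent's source text is literally identical to mine."""
    return "C" if opponent_source == inspect.getsource(clique_bot) else "D"


def clique_bot_clone(opponent_source: str) -> str:
    """Same idea as clique_bot, but written slightly differently."""
    if opponent_source == inspect.getsource(clique_bot_clone):
        return "C"
    return "D"


def defect_bot(opponent_source: str) -> str:
    """Ignore the opponent's code and always defect."""
    return "D"


def play(program_x, program_y):
    """Program-swap game: each program reads the other's source, and the move it
    returns is executed unconditionally -- the submission itself is the commitment."""
    move_x = program_x(inspect.getsource(program_y))
    move_y = program_y(inspect.getsource(program_x))
    return PAYOFFS[(move_x, move_y)]


print(play(clique_bot, clique_bot))        # (3, 3): exact source match -> cooperate
print(play(clique_bot, defect_bot))        # (1, 1): mismatch -> mutual defection
print(play(clique_bot, clique_bot_clone))  # (1, 1): same intent, different text -> defection
```

The syntactic comparison is of course the crudest possible condition, but the point stands for any condition: whatever guarantee X gets, it rests on X knowing which code Y actually runs.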