My intuition is that by conforming to one agent’s code analysis routines, I lose part of my abilities, which may make me unable to conform to other agents’ code analysis routines.
Any decision restricts what can happen, relative to what you knew before making it, but it doesn’t necessarily make future decisions more difficult. Coordinating with other agents requires deciding on some properties of your behavior, and those decisions may as well constrain only the actions that actually need to be coordinated with other agents.
For example, a strategy is a kind of generalized action, which could take the form of a straightforwardly represented algorithm chosen for a particular situation (to act in response to possible future observations). After the strategy has played out, or if some condition indicates that it’s no longer applicable, decision making may resume its normal, more general operation, so the mode of operation in which your behavior becomes more tractable can be temporary. If this strategy includes a procedure for deciding whether to cooperate with similarly chosen strategies of other agents, it will do the trick, without taking on much more responsibility than a single action. It will just be the kind of action that’s smart enough to cooperate with other agents’ actions.
So it is not necessary to change my whole code; it’s enough to create a new, transparent “cooperation routine” and let it guide my behavior, with the possibility of ending the routine if the other agents stop cooperating or something unexpected happens. That makes sense.
(Though in real life I would be rather afraid to self-modify in this way, because an imperfection in the cooperation routine could be exploited. Even if the other agents’ cooperation routines contain no code for exploiting bugs in my routine, maybe those agents have already created hidden sub-agents that will try to find and exploit such bugs.)
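A minimal sketch of what such a temporary “cooperation routine” could look like, assuming everything here is illustrative: the class and method names are invented, and literal source-code comparison stands in for whatever real “code analysis” the agents would actually do. The point is only that the routine is small, transparent, and hands control back to the agent’s ordinary decision making when it stops applying.

```python
# Hypothetical sketch: an agent temporarily delegates decisions to a small,
# transparent routine that (a) checks whether the other agent is running an
# equivalent routine, and (b) returns control to the agent's general decision
# making once the strategy has played out or the other side stops cooperating.

import inspect
from typing import Callable


class CooperationRoutine:
    """A transparent, temporary strategy adopted for one interaction."""

    def __init__(self, applicable: Callable[[dict], bool]):
        self.applicable = applicable  # condition under which the routine still applies
        self.active = True

    def source(self) -> str:
        # Expose the routine's own code so other agents can inspect it.
        return inspect.getsource(type(self))

    def equivalent(self, other_source: str) -> bool:
        # Crude stand-in for "code analysis": cooperate only with agents whose
        # published routine is literally the same code.
        return other_source == self.source()

    def act(self, observation: dict) -> str:
        if not self.applicable(observation):
            self.active = False          # condition no longer holds: end the routine
            return "RESUME_GENERAL_DECISION_MAKING"
        if self.equivalent(observation.get("other_routine_source", "")):
            return "COOPERATE"
        self.active = False              # other agent isn't running a matching routine
        return "RESUME_GENERAL_DECISION_MAKING"


class Agent:
    """The full agent; only this one interaction is delegated to the routine."""

    def __init__(self):
        self.routine = None

    def general_decision(self, observation: dict) -> str:
        return "DEFECT"  # placeholder for the agent's ordinary, opaque reasoning

    def decide(self, observation: dict) -> str:
        if self.routine and self.routine.active:
            action = self.routine.act(observation)
            if action != "RESUME_GENERAL_DECISION_MAKING":
                return action
            self.routine = None          # routine ended; fall back to normal operation
        return self.general_decision(observation)


# Usage: two agents adopt the same transparent routine and cooperate; once the
# routine's condition fails, control reverts to general decision making.
a, b = Agent(), Agent()
a.routine = CooperationRoutine(lambda obs: obs.get("round", 0) < 10)
b.routine = CooperationRoutine(lambda obs: obs.get("round", 0) < 10)

print(a.decide({"round": 1, "other_routine_source": b.routine.source()}))  # COOPERATE
print(a.decide({"round": 11}))  # routine no longer applies, falls back: DEFECT
```

Note that the routine here constrains only this one interaction; the rest of the agent’s behavior stays unconstrained, which is the sense in which adopting it is no bigger a commitment than a single action.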
A real-life analogy is a contract, with a powerful government enforcing your precommitments.