In principle (depending upon computation models) this should be possible.
With this degree of knowledge of how the other operates, it should be possible for the two agents to generate a shared unbiased bit (see the sketch after this list), by each agent:
1. choosing some computable real number N and some digit position M,
2. such that they have no current expectation of that digit being biased,
3. computing the digit,
4. using the other’s source code to compute the other’s digit,
5. combining the digits (e.g. XOR for binary),
6. verifying that the other didn’t cheat,
7. using the result to enact the decision.
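Here is a minimal sketch of steps 1–5 and 7 in Python. The choice of √2 and √3 as the computable reals, and the digit positions, are purely illustrative stand-ins; nothing here is a real agent, just the digit arithmetic.

```python
from math import isqrt

def sqrt_digit(n: int, m: int) -> int:
    """m-th binary digit of sqrt(n) after the point, via exact integer
    arithmetic: floor(sqrt(n) * 2**m) mod 2 == isqrt(n * 4**m) mod 2."""
    return isqrt(n * 4**m) % 2

# Steps 1-2: each agent deterministically picks a computable real and a
# digit position it has no current reason to expect is biased.
# (The constants and positions here are hypothetical.)
choice_a = (2, 1_000)   # A: the 1000th binary digit of sqrt(2)
choice_b = (3, 1_729)   # B: the 1729th binary digit of sqrt(3)

digit_a = sqrt_digit(*choice_a)  # step 3: A computes its own digit
digit_b = sqrt_digit(*choice_b)  # step 4: A recomputes B's digit from B's
                                 # (deterministic) source code
shared_bit = digit_a ^ digit_b   # step 5: XOR of independent bits is
                                 # unbiased if either one is
print(shared_bit)                # step 7: 0 -> one option, 1 -> the other
```

The exact integer arithmetic matters: both agents, running the same deterministic code, are guaranteed to agree on every digit, with no floating-point divergence to argue about.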
In principle, each agent can use the other’s source code to verify that the other will not cheat in any of these steps.
Even if B currently knows a lot more about the values of specific numbers than A does, that doesn’t help B get the result it wants: B has to choose a number and position that B doesn’t expect to be biased, and A can check, from B’s source, whether B really did not expect it to be biased.
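A hedged sketch of what step 6 could look like under the same toy assumptions. The `ToyAgent` interface (`choose_number_and_position`, `expects_bias`) is entirely hypothetical, standing in for actually sandboxing and re-running the other party’s source code:

```python
from math import isqrt

def sqrt_digit(n: int, m: int) -> int:
    return isqrt(n * 4**m) % 2  # as in the sketch above

class ToyAgent:
    """Hypothetical stand-in for an agent reconstructed from its source."""
    def __init__(self, n: int, m: int):
        self.n, self.m = n, m
    def choose_number_and_position(self) -> tuple[int, int]:
        return (self.n, self.m)   # deterministic choice, fixed by the source
    def expects_bias(self, n: int, m: int) -> bool:
        return False              # toy model: no expectation of bias

def verify_no_cheating(other: ToyAgent, announced_choice: tuple[int, int],
                       announced_digit: int) -> bool:
    # Step 6: re-run the other's deterministic choice from its source.
    if other.choose_number_and_position() != announced_choice:
        return False              # the announced choice was a lie
    n, m = announced_choice
    if other.expects_bias(n, m):
        return False              # the other did expect the digit to be biased
    return sqrt_digit(n, m) == announced_digit  # digit computed honestly

b = ToyAgent(3, 1_729)
print(verify_no_cheating(b, (3, 1_729), sqrt_digit(3, 1_729)))  # True
```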
Note that this, like almost anything to do with agents verifying each other via source code, is purely theoretical and utterly useless in practice: step 6 will be impossible for at least one party.
Agent A doesn’t know that the creators of agent B didn’t run the whole interaction with a couple of different versions of B’s code until finding one that results in an N and M that produce the bit they want. You can’t deduce that by looking at B’s code.
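Concretely, reusing the toy code above: B’s creators don’t need B to cheat at all, they just keep regenerating honest-looking versions of B until one happens to flip the coin their way (a hypothetical illustration):

```python
def forge_agent(desired_bit: int, digit_a: int) -> ToyAgent:
    # Search over candidate "versions of B" (here: different digit
    # positions for sqrt(3)) until the shared bit comes out as desired.
    m = 1
    while (digit_a ^ sqrt_digit(3, m)) != desired_bit:
        m += 1
    return ToyAgent(3, m)  # passes verify_no_cheating like any honest B
```

Every version found this way passes A’s step-6 check, which is the point: the cheating happened upstream of the code A gets to inspect.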
I’m very confused about what the model is here. Are you saying that agents A and B (with source code) are just proxies created by other agents C and D (whose internal details are unknown to the agents on the other side of the communication/acausal barrier)?
What is the actual mechanism by which A knows B’s source code and vice versa, without any communication or any causal links? How does A know that D won’t just ignore whatever decision B makes and vice versa?