The processes need to agree on a shared solution algorithm. If the algorithm does consequentialist decision making, it needs to be able to anticipate how a possible joint policy it might suggest to all the processes would play out (so unless it’s some flavor of FDT, it’s going to be very confused by what’s going on). But in general it could be any algorithm that does anything. The toy example of cooperation in PD can be scaled up from following (C, C) to running an algorithm, as long as that algorithm (though not its output) is as obvious to both processes as (C, C) is, as an option for what they might want to do.
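As a minimal sketch of the toy case, assuming each process can read the other’s source code (`mirror_bot` is an illustrative name, and exact source comparison is the crudest possible stand-in for verification):

```python
import inspect

def mirror_bot(opponent_source: str) -> str:
    """Cooperate exactly when the opponent is a copy of this program."""
    my_source = inspect.getsource(mirror_bot)
    return "C" if opponent_source == my_source else "D"

# Two copies each verify the other is the same program and settle on (C, C).
src = inspect.getsource(mirror_bot)
assert mirror_bot(src) == "C"
```

Exact string comparison is brittle (two equivalent but syntactically different programs would defect on each other); the scaled-up version replaces it with reasoning about what the other program does.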
So if there is some well-known algorithm, say harsanyi(X, Y), whose adjudication the processes X and Y would predictably abide by, they can each separately verify that fact about the other by reasoning about programs, run the algorithm, and then blindly follow its instructions, secure in the knowledge that the other did the same.
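A sketch of that protocol, under the same source-reading assumption. The `harsanyi` body here is a trivial stand-in (any well-known deterministic function of the two programs would do), and `predictably_abides` substitutes a crude syntactic check for real reasoning about programs:

```python
import inspect

def harsanyi(x_source: str, y_source: str) -> tuple[str, str]:
    """Stand-in adjudicator: deterministic in its two inputs, so both
    processes compute the same joint policy from the same sources."""
    return ("C", "C")  # joint policy: (action for X, action for Y)

def predictably_abides(source: str) -> bool:
    """Toy stand-in for 'reasoning about programs': a crude syntactic check
    that the other process defers to harsanyi. A real version would be a
    proof search over the program's behavior, not string matching."""
    return "harsanyi(" in source

def process(my_source: str, other_source: str, i_am_x: bool) -> str:
    # 1. Verify the other process would abide by harsanyi's adjudication.
    if not predictably_abides(other_source):
        return "D"  # fall back to the safe action
    # 2. Run the shared algorithm on both sources...
    x_src, y_src = (my_source, other_source) if i_am_x else (other_source, my_source)
    joint = harsanyi(x_src, y_src)
    # 3. ...and blindly follow its instruction for my own slot.
    return joint[0] if i_am_x else joint[1]

# Two copies verify each other, run the adjudicator, and follow it.
src = inspect.getsource(process)
assert (process(src, src, True), process(src, src, False)) == ("C", "C")
```

The load-bearing step is the verification: once each process has established that the other defers to harsanyi, neither needs to inspect or predict the adjudicator’s output before committing to follow it.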