if both participants are superintelligent and can simulate each other before submitting answers[1], and if the payoffs are something like: loss 0, draw 0.5, win 1 (game never begins 0.5), then i think one of these happens:
- the game ends in a draw, as you say
- you collaborate so that each of you wins 50% of the time (same EV as a draw)
- the game never begins, because you're both programs that try to simulate each other, which is infinitely recursive and therefore non-terminating
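the non-terminating case can be sketched concretely. this is a toy illustration (the names `policy_a` / `policy_b` are hypothetical, not anything from the original setup): two policies that each simulate the other before moving recurse forever, and in real Python the recursion limit stands in for "the game never begins":

```python
import sys

# hypothetical policies: each tries to fully simulate its opponent
# before choosing its own move
def policy_a(opponent):
    predicted = opponent(policy_a)  # simulate opponent's reasoning
    return "counter(" + predicted + ")"

def policy_b(opponent):
    predicted = opponent(policy_b)  # opponent simulates us in turn
    return "counter(" + predicted + ")"

sys.setrecursionlimit(1000)
try:
    policy_a(policy_b)
    result = "terminated"
except RecursionError:
    # a -> b -> a -> b -> ... : the mutual simulation never bottoms out
    result = "mutual simulation never terminates"
print(result)
```

of course, actual superintelligences could try something smarter than naive full simulation (e.g. reasoning about the opponent's source code abstractly), which is why this is only one of the possible outcomes.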
even if the code describing the ASI's competitor's policy is >2N in length, it's not stated that the ASI is unable to execute code of that length prior to its submission.
there's technically an asymmetry: if the competitor's policy's code is >2N, then the ASI can't include it in its submission. but i don't see how this would affect the outcome.