I wouldn’t call a program that fails to cooperate with itself “optimal”. My program is more optimal than yours, because it cooperates with itself in finite time. :-)
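For concreteness, here's a minimal sketch (my own illustration, not anyone's actual entry) of the kind of strategy that cooperates with itself in finite time in the source-code-swap Prisoner's Dilemma: a "clique" program that cooperates exactly when the opponent's source is byte-identical to its own. The harness interface (`play(my_source, opponent_source)`) is an assumption about how the tournament passes source code around.

```python
import inspect

def play(my_source: str, opponent_source: str) -> str:
    """One round of a source-code-swap Prisoner's Dilemma.

    Cooperate iff the opponent's source is byte-identical to our own.
    The check is a plain string comparison, so it always halts:
    two copies of this program reach mutual cooperation in finite
    (constant) time, with no proof search and no risk of
    non-terminating introspection.
    """
    return "C" if opponent_source == my_source else "D"

# Self-play halts immediately with mutual cooperation:
src = inspect.getsource(play)
assert play(src, src) == "C"
# Anything syntactically different gets defected against:
assert play(src, "def play(m, o): return 'D'") == "D"
```

The trade-off is that syntactic equality is brittle (a functionally identical copy with different whitespace gets defected against), whereas proof-searching variants cooperate with a wider class of opponents but give up the guarantee of halting in bounded time.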
But this is an interesting direction of inquiry too. Is there a non-contradictory way to add something like magical predictor oracles to scenario 1? Would the resulting problem be mathematically interesting? Eliezer seems to think so, unless I've misunderstood his position… but he refuses to disclose the math.