How do you play “cooperate iff (the opponent cooperates iff I cooperate)” in a GLT? Is the programmer supposed to be modeling the opponent AI in sufficient resolution to guess how much the opponent AI knows about the programmer’s decision, and how many other possible programmers that the AI is modeling are likely to correlate with it? Does S compute the programmer’s decision using S’s knowledge or only the programmer’s knowledge? Does S compute the opponent inaccurately as if it were modeling only the programmer, or accurately as if it were modeling both the programmer and S?
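(Purely as an illustration of the strategy being asked about, not of anything specified in this thread, here is one way to read it in a program-game framing where strategies can simulate one another under a recursion budget. The bots, the `budget` parameter, and the two-counterfactual reading of “cooperates iff I cooperate” are hypothetical choices of mine, not part of the GLT proposal.)

```python
C, D = "C", "D"

def cooperate_bot(opponent, budget):
    return C

def defect_bot(opponent, budget):
    return D

def conditional_cooperator(opponent, budget):
    """Cooperate iff (the opponent cooperates iff I cooperate)."""
    if budget <= 0:
        return D  # give up when the simulation budget runs out
    # Read "the opponent cooperates iff I cooperate" as two counterfactual tests:
    # it cooperates against a cooperator, and it defects against a defector.
    coop_when_i_coop = opponent(cooperate_bot, budget - 1) == C
    defect_when_i_defect = opponent(defect_bot, budget - 1) == D
    return C if (coop_when_i_coop and defect_when_i_defect) else D

def mirror_bot(opponent, budget):
    # Plays back whatever the opponent would play against a pure cooperator.
    if budget <= 0:
        return D
    return opponent(cooperate_bot, budget - 1)

print(conditional_cooperator(mirror_bot, 3))     # C
print(conditional_cooperator(cooperate_bot, 3))  # D (it also cooperates when I defect)
print(conditional_cooperator(defect_bot, 3))     # D
```

Under this reading the strategy cooperates with an opponent that mirrors it and defects against unconditional cooperators and defectors; how to express the same conditional inside a GLT is what the question above is asking.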
I suppose that a strict CDT could replace itself with a GLT, if that GLT can take into account every case where the opponent AI gets a glimpse of the GLT after it’s written. Then the GLT behaves just like the code I specified before on, e.g., Newcomb’s Problem: one-box if Omega glimpses the GLT, or gets evidence about it, after the GLT was written; two-box if Omega perfectly knows your code 5 seconds before the GLT was written.
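(A minimal sketch of that conditioning rule, assuming it can be collapsed into a single observable flag for whether Omega saw, or got evidence about, the GLT after it was written; the `NewcombSituation` wrapper and its field name are hypothetical.)

```python
from dataclasses import dataclass

@dataclass
class NewcombSituation:
    # True if Omega observed (or got evidence about) the GLT after it was written,
    # so the GLT's contents could influence Omega's prediction.
    omega_saw_glt_after_writing: bool

def glt_newcomb_choice(situation: NewcombSituation) -> str:
    """Choose 'one-box' or 'two-box' per the rule described above."""
    if situation.omega_saw_glt_after_writing:
        # The GLT's own contents feed into the prediction: one-boxing pays.
        return "one-box"
    # Omega's prediction was fixed before the GLT existed (e.g. from reading the
    # programmer's code 5 seconds earlier): a CDT-built GLT two-boxes.
    return "two-box"

print(glt_newcomb_choice(NewcombSituation(omega_saw_glt_after_writing=True)))   # one-box
print(glt_newcomb_choice(NewcombSituation(omega_saw_glt_after_writing=False)))  # two-box
```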
[Edit: Don’t bother responding to this yet. I need to think this through.]
How do you play “cooperate iff (the opponent cooperates iff I cooperate)” in a GLT?
I’m not sure this question makes sense. Can you give an example?
Does S compute the programmer’s decision using S’s knowledge or only the programmer’s knowledge?
S should take the programmer R’s prior and memories/sensory data at the time of coding, and compute a posterior probability distribution from them (assuming it would do a better job of this than R). It would then use that posterior to compute R’s expected utility when choosing the optimal GLT. This falls out of the idea that S is trying to approximate what the GLT would be if R had logical omniscience.
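(A rough sketch of how I picture that procedure; this is my own formalization, and the finite set of `worlds`, the `likelihood`, `outcome`, and `utility` arguments, and the explicit enumeration of candidate GLTs are hypothetical stand-ins, not part of the proposal.)

```python
def posterior(prior, likelihood, sense_data, worlds):
    """Bayes-update R's prior on R's memories/sensory data."""
    unnorm = {w: prior[w] * likelihood(sense_data, w) for w in worlds}
    z = sum(unnorm.values())
    return {w: p / z for w, p in unnorm.items()}

def optimal_glt(prior, likelihood, sense_data, worlds,
                candidate_glts, outcome, utility):
    """Return the candidate GLT with highest expected utility for R,
    where outcome(glt, world) says what happens if that GLT is run in that world."""
    post = posterior(prior, likelihood, sense_data, worlds)

    def expected_utility(glt):
        return sum(post[w] * utility(outcome(glt, w)) for w in worlds)

    return max(candidate_glts, key=expected_utility)
```

The point the sketch tries to capture is that the expectation is taken over R’s posterior (computed from R’s prior and data), even though S is the one doing the computing.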
Is the programmer supposed to be modeling the opponent AI in sufficient resolution to guess how much the AI knows about the programmer?
No, S will do it.
Does S compute the opponent as if it were modeling only the programmer, or both the programmer and S?
I guess both, but I don’t understand the significance of this question.