“Predict what Omega thinks you’ll do, then do the opposite”.
Which is really what the halting problem amounts to anyway, except that it’s not going to be spelled out; it’s going to be something that is equivalent to that but in a nonobvious way.
Saying “Omega will determine what the agent outputs by reading the agent’s source code” is going to implicate the halting problem.
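To make that connection explicit, here is a minimal sketch of the diagonalization, assuming a hypothetical omega_predicts function that takes an agent’s source and returns the choice (“one-box” or “two-box”) Omega expects that agent to make; the names are illustrative, not anything actually specified in the problem:

    # Hypothetical predictor: omega_predicts(source) returns the choice
    # ("one-box" or "two-box") Omega expects the agent with that source to make.
    def contrary_agent(omega_predicts, own_source):
        """Do the opposite of whatever Omega predicts this agent will do."""
        prediction = omega_predicts(own_source)
        return "two-box" if prediction == "one-box" else "one-box"

If omega_predicts were a total, always-correct program, running contrary_agent on its own source yields a contradiction: whichever choice Omega predicts, the agent makes the other one. That is the same construction as the standard proof that no program decides the halting problem.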
Predict what Omega thinks you’ll do, then do the opposite
I don’t know if that is possible given Unknowns’ constraints. Upthread Unknowns defined this variant of Newcomb as:
Let’s say I am Omega. The things that are playing are AIs. They are all 100% deterministic programs, and they take no input except an understanding of the game. They are not allowed to look at their source code.
Since the player is not allowed to look at its own (or, presumably, Omega’s) code, it is not clear to me that it can implement a decision algorithm that will predict what Omega will do and then do the opposite. However, if you remove Unknowns’ restrictions on the players, then your idea will cause some serious issues for Omega! In fact, a player that can predict Omega as effectively as Omega can predict the player seems like a reductio ad absurdum of Newcomb’s paradox.
If Omega is a program too, then an AI that is playing can have a subroutine that is equivalent to “predict Omega”. The AI doesn’t have to actually look at its own source code to do things that are equivalent to looking at its own source code—that’s how the halting problem works!
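As a toy illustration (an assumed setup, not anything Unknowns specified): if Omega predicts the player by simulating it, and the player’s subroutine predicts Omega by simulating Omega, the two simulations just call each other and neither ever returns an answer:

    import sys

    def omega_predicts_player():
        # Omega simulates the player to see what it will choose.
        return player_decides()

    def player_decides():
        # The player simulates Omega's prediction, then does the opposite.
        prediction = omega_predicts_player()
        return "two-box" if prediction == "one-box" else "one-box"

    if __name__ == "__main__":
        sys.setrecursionlimit(1000)
        try:
            print(player_decides())
        except RecursionError:
            print("neither simulation ever bottoms out")

A real Omega presumably would not predict by naive simulation, but the regress is the same difficulty the diagonalization above points at.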
If Omega is not a program and can do things that a program can’t do, then this isn’t true, but I am skeptical that such an Omega is a meaningful concept.
Of course, since the qualifier “deterministic” applies only to the players, Omega can pick randomly while the program cannot; but since Omega is predicting a deterministic program, picking randomly can’t help Omega do any better.