It should be obvious that there is no difference between regular Newcomb and genetic Newcomb here. I examine the source code to see whether the program will one-box or not; that is the same as looking at its genetic code to see if it has the one-boxing gene.
Regular Newcomb requires that, for certain decision algorithms, Omega solve the halting problem. Genetic Newcomb requires that Omega look for the gene, something which he can always do. The “regular equivalent” of genetic Newcomb is that Omega looks at the decision maker’s source code, but it so happens that most decision makers work in ways which are easy to analyze.
How so? I have not been able to come up with a valid decision algorithm that would require Omega to solve the halting problem. Do you have an example?
“Predict what Omega thinks you’ll do, then do the opposite”.
Which is really what the halting problem amounts to anyway, except that it's not going to be spelled out; it's going to be something that is equivalent to it, but in a non-obvious way.
Saying “Omega will determine what the agent outputs by reading the agent’s source code” is going to implicate the halting problem.
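To make the diagonal construction concrete, here is a minimal sketch (the names, the toy Omega, and the agent string are all mine, purely illustrative) of what happens if Omega predicts by reading and simulating the agent's source while the agent runs "predict what Omega thinks you'll do, then do the opposite":

```python
# A toy sketch, not anyone's actual proposal: assume Omega is itself a
# program that predicts by reading the agent's source and simulating it.
def omega_predict(agent_source: str) -> str:
    """Simulate the agent's source and report the choice it makes."""
    env = {"omega_predict": omega_predict}
    exec(agent_source, env)
    return env["choose"](agent_source)

# The adversarial agent: ask Omega about myself, then do the opposite.
# (A program can carry its own source as data via the quine trick, so
# passing it in as an argument here is just a shortcut.)
ADVERSARIAL_SOURCE = """
def choose(my_source):
    prediction = omega_predict(my_source)
    return "two-box" if prediction == "one-box" else "one-box"
"""

if __name__ == "__main__":
    try:
        print(omega_predict(ADVERSARIAL_SOURCE))
    except RecursionError:
        # Omega simulates the agent, which simulates Omega simulating the
        # agent, and so on: this kind of Omega never halts on this agent.
        print("Omega's simulation of this agent never terminates")
```

Running it blows through Python's recursion limit, which is the finite stand-in for "a simulate-and-report Omega never halts on this agent."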
Predict what Omega thinks you’ll do, then do the opposite
I don’t know if that is possible given Unknowns’ constraints. Upthread Unknowns defined this variant of Newcomb as:
Let’s say I am Omega. The things that are playing are AIs. They are all 100% deterministic programs, and they take no input except an understanding of the game. They are not allowed to look at their source code.
Since the player is not allowed to look at its own (or, presumably, Omega’s) code, it is not clear to me that it can implement a decision algorithm that will predict what Omega will do and then do the opposite. However, if you remove Unknowns’ restrictions on the players, then your idea will cause some serious issues for Omega! In fact, a player that can predict Omega as effectively as Omega can predict the player seems like a reductio ad absurdum of Newcomb’s paradox.
If Omega is a program too, then an AI that is playing can have a subroutine that is equivalent to “predict Omega”. The AI doesn’t have to actually look at its own source code to do things that are equivalent to looking at its own source code—that’s how the halting problem works!
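For instance, the standard quine construction (the sketch below is mine, not from the thread) shows how a program can reconstruct its own source as ordinary data, with no facility for inspecting itself:

```python
# The standard quine construction: these three statements print an exact
# copy of themselves without reading any file or inspecting any code
# object. A deterministic AI player could embed its own source the same
# way and feed it to a "simulate Omega simulating me" subroutine.
template = 'template = %r\nsource = template %% template\nprint(source)'
source = template % template
print(source)
```

The player never has to "look at" anything; its own code is simply a string it already carries.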
If Omega is not a program and can do things that a program can’t do, then this isn’t true, but I am skeptical that such an Omega is a meaningful concept.
Of course, the qualifier “deterministic” applies only to the players, so Omega could pick randomly, which the program cannot do; but since Omega is predicting a deterministic program, picking randomly can’t help Omega do any better.
Now that I think of it, it depends on exactly what it means for Omega to tell that you have a gene for two-boxing. If Omega has the equivalent of a textbook saying “gene AGTGCGTTACT leads to two-boxing” or if the gene produces a brain that is incapable of one-boxing at all in the same way that genes produce lungs that are incapable of breathing water, then what I said applies. If it’s a gene for two-boxing because it causes the bearer to produce a specific chain of reasoning, and Omega knows it’s a two-boxing gene because Omega has analyzed the chain and figured out that it leads to two-boxing, then there actually is no difference.
(This is complicated by the fact that the problem states that having the gene is statistically associated with two-boxing, which is neither of those. If the gene is only statistically associated with two-boxing, it might be that the gene makes the bearer likely to two-box in ways that are not implicated if the bearer reasons the problem out in full logical detail.)
Actually, there’s another difference. The original Newcomb problem implies that it is possible for you to figure out the correct answer. With genetic Newcomb, it may be impossible for you to figure out the correct answer.
It is true that having your decision determined by your genes is similar to having your decision determined by the algorithm you are executing. However, both sets of genes can exist, whereas if your decision is determined by the algorithm you are running, certain algorithms are self-contradictory and cannot exist. (Consider an algorithm that predicts what Omega will do and acts based on that prediction.) Although now that I think of it, that’s pretty much the halting problem objection anyway.