The general mistake that many people are making here is to think that determinism makes a difference. It does not.
Let’s say I am Omega. The players are AIs. They are all 100% deterministic programs, and they take no input except an understanding of the game. They are not allowed to look at their own source code.
I play my part as Omega in this way. I examine the source code of the program. If I see that it is a program that will one-box, I put the million. If I see that it is a program that will two-box, I do not put the million.
Note that determinism is irrelevant. If a program couldn’t use a decision theory or couldn’t make a choice just because it is a deterministic program, then no AI would ever work in the real world, and there would be no reason to expect people to work in the real world either.
Also note that the only good decision in these cases is to one-box, even though the programs are 100% deterministic.
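A minimal sketch of that setup (all names below are hypothetical, not anyone’s actual proposal): because each player is a deterministic program that takes no input, this Omega can predict it simply by running it, and the one-boxer walks away with more money.

```python
# Hypothetical sketch: Omega vs. deterministic, input-free players.
# Since a player takes no input, running it once is a perfect prediction.

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def omega_fill_boxes(player):
    """Omega inspects (here: simulates) the player and decides whether
    to put the $1,000,000 in the opaque box."""
    predicted = player()  # deterministic, so this is the prediction
    return 1_000_000 if predicted == "one-box" else 0

def play(player):
    opaque = omega_fill_boxes(player)  # Omega moves first
    choice = player()                  # then the player chooses
    return opaque if choice == "one-box" else opaque + 1_000

print(play(one_boxer))   # 1000000
print(play(two_boxer))   # 1000
```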
You’re describing regular Newcomb, not this gene version. (Also note that Omega needs to have more processing power than the programs to do what you want it to do, just as in the human version.) The analogue would be defining a short program that Omega will run over the AI’s code, which predicts what the AI will output correctly 99% of the time. Then it becomes a question of whether any given AI can outwit the program. If an AI thinks the program won’t work on it, for whatever reason (by which I mean “conditioning on myself picking X doesn’t cause my estimate of the prediction program outputting X to change, and vice versa”), it’s free to choose whatever it wants to.
Getting back to humans, I submit that a certain class of people who actually think about the problem will induce a far greater failure rate in Omega, and that this severs the causal link between my decision and Omega’s, in the same way that an AI might be able to predict that the prediction program won’t work on it.
As I said elsewhere, were this incorrect, my position would change, but then you probably aren’t talking about “genes” anymore. You shouldn’t be able to get 100% prediction rates from only genes.
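A toy illustration of the “short prediction program” idea (everything below is hypothetical, and far cruder than a 99%-accurate predictor): a predictor that just pattern-matches on the player’s source code works on straightforward agents but can be outwitted by an agent whose source doesn’t fit the pattern.

```python
import inspect

# Hypothetical stand-in for the short program Omega runs over the AI's code.
# A real one would be right ~99% of the time; this one is only right on
# agents whose source is easy to pattern-match.
def crude_predictor(player):
    src = inspect.getsource(player)
    return "one-box" if '"one-box"' in src else "two-box"

def straightforward_agent():
    return "one-box"

def obfuscated_agent():
    # Also chooses one-box, but in a way the pattern-match misses.
    return "one-" + "box"

print(crude_predictor(straightforward_agent))  # "one-box" (correct)
print(crude_predictor(obfuscated_agent))       # "two-box" (wrong)
print(obfuscated_agent())                      # "one-box"
```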
It should be obvious that there is no difference between regular Newcomb and genetic Newcomb here. I examine the source code to see whether the program will one-box or not; that is the same as looking at its genetic code to see if it has the one-boxing gene.
Regular Newcomb requires that, for certain decision algorithms, Omega solve the halting problem. Genetic Newcomb requires that Omega look for the gene, something which he can always do. The “regular equivalent” of genetic Newcomb is that Omega looks at the decision maker’s source code, but it so happens that most decision makers work in ways which are easy to analyze.
How so? I have not been able to come up with a valid decision algorithm that would require Omega to solve the halting problem. Do you have an example?
“Predict what Omega thinks you’ll do, then do the opposite.”
Which is really what the halting problem amounts to anyway, except that it’s not going to be spelled out; it’s going to be something that is equivalent to that but in a nonobvious way.
Saying “Omega will determine what the agent outputs by reading the agent’s source code” is going to implicate the halting problem.
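A sketch of why that move drags in the halting problem (the code and names below are hypothetical): if Omega’s predictor is itself just a program the agent can call, the agent can diagonalize against it, and no predictor of this kind can be right about every such agent.

```python
# Hypothetical diagonalization sketch. If Omega's predictor is a program the
# agent can run, the agent can simply invert whatever it predicts.

def omega_predict(agent):
    # Stand-in for Omega's predictor; any concrete implementation goes here.
    return "one-box"

def contrarian_agent():
    # "Predict what Omega thinks you'll do, then do the opposite."
    predicted = omega_predict(contrarian_agent)
    return "two-box" if predicted == "one-box" else "one-box"

print(contrarian_agent())  # "two-box", the opposite of what this stub predicted
```

Whatever omega_predict says about contrarian_agent, the agent does the opposite, so the predictor is wrong about it by construction; and a predictor that tried to get it right by simulating an agent that in turn calls the predictor would never terminate. That is the same kind of obstacle as a general halting-problem solver.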
Predict what Omega thinks you’ll do, then do the opposite
I don’t know if that is possible given Unknowns’ constraints. Upthread Unknowns defined this variant of Newcomb as:
Let’s say I am Omega. The players are AIs. They are all 100% deterministic programs, and they take no input except an understanding of the game. They are not allowed to look at their own source code.
Since the player is not allowed to look at its own (or, presumably, Omega’s) code, it is not clear to me that it can implement a decision algorithm that will predict what Omega will do and then do the opposite. However, if you remove Unknowns’ restrictions on the players, then your idea will cause some serious issues for Omega! In fact, a player that can predict Omega as effectively as Omega can predict the player seems like a reductio ad absurdum of Newcomb’s paradox.
If Omega is a program too, then an AI that is playing can have a subroutine that is equivalent to “predict Omega”. The AI doesn’t have to actually look at its own source code to do things that are equivalent to looking at its own source code—that’s how the halting problem works!
If Omega is not a program and can do things that a program can’t do, then this isn’t true, but I am skeptical that such an Omega is a meaningful concept.
Of course, the qualifier “deterministic” applies only to the programs, so Omega could pick randomly even though the program cannot; but since Omega is predicting a deterministic program, picking randomly can’t help Omega do any better.
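To spell that last point out (a hypothetical toy calculation, not anything from the thread): against a fixed deterministic choice, any randomized prediction strategy for Omega is at best as accurate as its best pure guess.

```python
# The player's choice is fixed in advance, say "one-box". If Omega predicts
# "one-box" with probability p (and "two-box" with probability 1 - p), its
# chance of being right is just p, which is maximized by the pure guess p = 1.
fixed_choice = "one-box"

def prediction_accuracy(p_one_box):
    return p_one_box if fixed_choice == "one-box" else 1 - p_one_box

print(prediction_accuracy(0.7))  # 0.7 -- randomizing
print(prediction_accuracy(1.0))  # 1.0 -- the best pure strategy does at least as well
```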
Now that I think of it, it depends on exactly what it means for Omega to tell that you have a gene for two-boxing. If Omega has the equivalent of a textbook saying “gene AGTGCGTTACT leads to two-boxing” or if the gene produces a brain that is incapable of one-boxing at all in the same way that genes produce lungs that are incapable of breathing water, then what I said applies. If it’s a gene for two-boxing because it causes the bearer to produce a specific chain of reasoning, and Omega knows it’s a two-boxing gene because Omega has analyzed the chain and figured out that it leads to two-boxing, then there actually is no difference.
(This is complicated by the fact that the problem states that having the gene is statistically associated with two-boxing, which is neither of those. If the gene is only statistically associated with two-boxing, it might be that the gene makes the bearer likely to two-box in ways that are not implicated if the bearer reasons the problem out in full logical detail.)
Actually, there’s another difference. The original Newcomb problem implies that it is possible for you to figure out the correct answer. With genetic Newcomb, it may be impossible for you to figure out the correct answer.
It is true that having your decision determined by your genes is similar to having your decision determined by the algorithm you are executing. However, we know that both sets of genes can exist, whereas if your decision is determined by the algorithm you are using, certain algorithms may be self-contradictory and cannot exist. (Consider an algorithm that predicts what Omega will do and acts based on that prediction.) Although now that I think of it, that’s pretty much the halting problem objection anyway.
Omega can solve the halting problem?