Except that, again, it means there are some strategies I cannot execute.
I don’t see how Omega running his simulation on a timer makes any difference for this, but either way this is normal and expected. Problem resolved.
I thought the assumption was that I am a perfect reasoner and can execute any strategy.
Not at all. Though it may be convenient to postulate arbitrarily large computing power (as long as Omega’s power is increased to match) so that we can consider brute force algorithms instead of having to also worry about how to make it efficient.
(Actually, if you look at the decision tree for Newcomb’s, the intended options for your strategy are clearly supposed to be “unconditionally one-box” and “unconditionally two-box”, with potentially a mixed strategy allowed. Which is why you are provided with no information whatsoever that would allow you to predict Omega. And indeed the decision tree explicitly states that your state of knowledge is identical whether Omega fills or doesn’t fill the box.)
I don’t see how Omega running his simulation on a timer makes any difference for this,
It’s me who has to run on a timer. If I am only permitted to execute 1000 instructions to decide what my answer is, I may not be able to simulate Omega.
Though it may be convenient to postulate arbitrarily large computing power
Yes, I am assuming that I am capable of executing arbitrarily many instructions when computing my strategy.
the intended options for your strategy are clearly supposed to be “unconditionally one-box” and “unconditionally two-box”, with potentially a mixed strategy allowed. Which is why you are provided with no information whatsoever that would allow you to predict Omega
I know what problem Omega is trying to solve. If I am a perfect reasoner, and I know that Omega is, I should be able to predict Omega without actually having knowledge of Omega’s internals.
Actually, if you look at the decision tree for Newcomb’s, the intended options for your strategy are clearly supposed to be “unconditionally one-box” and “unconditionally two-box”,
Deciding which branch of the decision tree to pick is something I do using a process that includes, as one of its steps, simulating Omega. It is tempting to say “it doesn’t matter what process you use to choose a branch of the decision tree, each branch has a value that can be compared independently of why you chose the branch”, but that’s not correct. In the original problem, if I just compare the branches without considering Omega’s predictions, I should always two-box. If I consider Omega’s predictions, that cuts off some branches in a way which changes the relative ranking of the choices. If I consider my predictions of Omega’s predictions, that cuts off more branches, in a way which prevents the choices from even having a ranking.
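For concreteness, here is a minimal sketch of the first two levels of that pruning, using the standard Newcomb payoffs ($1,000 always in the small box, $1,000,000 possibly in the big box; the specific numbers are an assumption for illustration, not something stated in this thread):

```python
# Minimal sketch of the branch-pruning point above, under the assumed
# standard payoffs: $1,000 always in the small box, $1,000,000
# possibly in the big box.

PAYOFF = {
    # (my choice, Omega filled the big box) -> what I walk away with
    ("one-box", True): 1_000_000,
    ("one-box", False): 0,
    ("two-box", True): 1_001_000,
    ("two-box", False): 1_000,
}

# Level 1: compare branches while ignoring Omega's prediction.
# Two-boxing dominates: it is better whether or not the box is filled.
for filled in (True, False):
    assert PAYOFF[("two-box", filled)] > PAYOFF[("one-box", filled)]

# Level 2: take Omega's prediction into account. If Omega predicts
# correctly, only the branches where filled == (choice == "one-box")
# survive, and the ranking flips in favour of one-boxing.
consistent = {
    choice: PAYOFF[(choice, choice == "one-box")]
    for choice in ("one-box", "two-box")
}
print(consistent)  # {'one-box': 1000000, 'two-box': 1000}
```

The third level, where your choice of branch depends on a prediction of Omega’s prediction of you, is the circularity the rest of this exchange is arguing about.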
Yes, I am assuming that I am capable of executing arbitrarily many instructions when computing my strategy.
But apparently you want to ignore the part where I said Omega has to have his own computing power increased to match. The fact that Omega is vastly more intelligent and computationally powerful than you is a fundamental premise of the problem. This is what stops you from magically “predicting him”.
Look, in Newcomb’s problem you are not supposed to be a “perfect reasoner” with infinite computing time or whatever. You are just a human. Omega is the superintelligence. So, any argument you make that is premised on being a perfect reasoner is automatically irrelevant and inapplicable. Do you have a point that is not based on this misunderstanding of the thought experiment? What is your point, even?
But apparently you want to ignore the part where I said Omega has to have his own computing power increased to match.
It’s already arbitrarily large. You want that expanded to match arbitrarily large?
Look, in Newcomb’s problem you are not supposed to be a “perfect reasoner”
Asking “which box should you pick” implies that you can follow a chain of reasoning which outputs an answer about which box to pick.
It sounds like your decision making strategy fails to produce a useful result.
My decision making strategy is “figure out what Omega did and do the opposite”. It only fails to produce a useful result if Omega fails to produce a useful result (perhaps by trying to predict me and not halting). And Omega goes first, so we never get to the point where I try my decision strategy and don’t halt.
(And if you’re going to respond with “then Omega knows in advance that your decision strategy doesn’t halt”, how’s he going to know that?)
Furthermore, there’s always the transparent boxes situation. Instead of explicitly simulating Omega, I implicitly simulate Omega by looking in the transparent boxes and determining what Omega’s choice was.
What is your point, even?
That Omega cannot be a perfect predictor because being one no matter what strategy the human uses would imply being able to solve the halting problem.
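Here is a toy rendering of that claim, with made-up function names: if the player’s strategy is literally “simulate Omega and do the opposite” and Omega’s prediction is literally “simulate the player”, neither call can return before the other does.

```python
# Toy version of the claimed circularity. The function names and the
# use of a direct call as "simulation" are illustrative only.

def omega_prediction():
    # Omega predicts the player by simulating the player's strategy.
    return player_strategy()

def player_strategy():
    # The player's strategy: figure out what Omega predicted and do
    # the opposite.
    predicted = omega_prediction()
    return "two-box" if predicted == "one-box" else "one-box"

# Neither call can bottom out: each needs the other's answer first.
# In the idealised setting this is a non-halting regress; in Python it
# simply blows the recursion limit, which stands in for "doesn't halt".
try:
    player_strategy()
except RecursionError:
    print("mutual simulation never bottoms out")
```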
It’s already arbitrarily large. You want that expanded to match arbitrarily large?
When I say “arbitrarily large” I do not mean infinite. You have some fixed computing power, X (which you can interpret as “memory size” or “number of computations you can do before the sun explodes the next day” or whatever). The premise of Newcomb’s is that Omega has some fixed computing power Q * X, where Q is really, really extremely large. You can increase X as much as you like, as long as Omega is still Q times smarter.
Asking “which box should you pick” implies that you can follow a chain of reasoning which outputs an answer about which box to pick.
Which does not even remotely imply being a perfect reasoner. An ordinary human is capable of doing this just fine.
My decision making strategy is “figure out what Omega did and do the opposite”. It only fails to produce a useful result if Omega fails to produce a useful result (perhaps by trying to predict me and not halting).
Two points. First, if Omega’s memory is Q times larger than yours, you can’t fit a simulation of him in your head, so predicting by simulation is not going to work. Second, if Omega has Q times as much computing time as you, you can try to predict him (by any method) for X steps, at which point the sun explodes. Naturally, Omega simulates you for X steps, notices that you didn’t give a result before the sun explodes, and so leaves both boxes empty and flies away to safety.
That Omega cannot be a perfect predictor because being one no matter what strategy the human uses would imply being able to solve the halting problem.
Only under the artificial, irrelevant-to-the-thought-experiment conditions that require him to care whether you’ll one-box or two-box after standing in front of the boxes for millions of years thinking about it. Whether it’s the sun exploding or Omega himself imposing a time limit, a realistic Omega only simulates for X steps, then stops. No halting-problem-solving involved.
In other words, if “Omega isn’t a perfect predictor” means that he can’t simulate a physical system for an infinite number of steps in finite time, then I agree but don’t give a shit. Such a thing is entirely unnecessary. In the thought experiment, if you are a human, you die of aging after less than 100 years. And any strategy that involves you thinking in front of the boxes until you die of aging (or starvation, for that matter) is clearly flawed anyway.
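A minimal sketch of that budgeted setup, where the player’s strategy is modelled as something that does one unit of work per step and Omega simply stops simulating after a fixed number of steps, treating “no answer yet” as an answer in itself. The generator encoding and the filling rule are illustrative assumptions, not anything specified above.

```python
# Sketch of a step-limited Omega. A "strategy" is a generator that
# yields once per unit of work and eventually returns "one-box" or
# "two-box" (or never returns). All names here are illustrative.

def run_with_budget(strategy, budget):
    """Run a strategy for at most `budget` steps."""
    gen = strategy()
    try:
        for _ in range(budget):
            next(gen)
    except StopIteration as done:
        return done.value      # the strategy answered within the budget
    return None                # no answer before the budget ran out

def omega_fills_big_box(strategy, budget):
    # Omega simulates for `budget` steps. A player who never answers
    # in time is treated like a two-boxer: the big box is left empty.
    return run_with_budget(strategy, budget) == "one-box"

def quick_one_boxer():
    yield                      # one step of "thinking"
    return "one-box"

def endless_deliberator():
    while True:                # deliberates forever, never answers
        yield

print(omega_fills_big_box(quick_one_boxer, budget=1000))      # True
print(omega_fills_big_box(endless_deliberator, budget=1000))  # False
```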
Furthermore, there’s always the transparent boxes situation. Instead of explicitly simulating Omega, I implicitly simulate Omega by looking in the transparent boxes and determining what Omega’s choice was.
This example is less stupid since it is not based on trying to circularly predict yourself. But in this case Omega just makes action-conditional predictions and fills the boxes however he likes.
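A sketch of what those action-conditional predictions might look like: Omega asks the player’s policy what it would do in each visible state of the transparent boxes, then fills them by whatever rule it likes given those answers. The particular rule below, rewarding whoever would one-box even while seeing the money, is just one illustrative choice.

```python
# Illustrative action-conditional prediction for transparent boxes.
# A policy maps what the player sees to a choice; Omega queries it
# counterfactually for each possible box state.

def predict_conditional(policy):
    """What would the player do if the big box were full / empty?"""
    return {
        "if_full": policy(big_box_full=True),
        "if_empty": policy(big_box_full=False),
    }

def omega_fills(policy):
    prediction = predict_conditional(policy)
    # One possible rule among many: fill the big box only for players
    # who would still one-box while looking at the money.
    return prediction["if_full"] == "one-box"

# The "look at the boxes and do the opposite of what Omega did" player
# from earlier in the thread, versus a consistent one-boxer:
contrarian = lambda big_box_full: "two-box" if big_box_full else "one-box"
consistent_one_boxer = lambda big_box_full: "one-box"

print(omega_fills(contrarian))            # False
print(omega_fills(consistent_one_boxer))  # True
```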
If I consider my predictions of Omega’s predictions, that cuts off more branches, in a way which prevents the choices from even having a ranking.
It sounds like your decision making strategy fails to produce a useful result. That is unfortunate for anyone who happens to attempt to employ it. You might consider changing it to something that works.
“Ha! What if I don’t choose One box OR Two boxes! I can choose No Boxes out of indecision instead!” isn’t a particularly useful objection.
No, Nshepperd is right. Omega imposing computation limits on itself solves the problem (such as it is). You can waste as much time as you like. Omega is gone and so doesn’t care whether you pick any boxes before the end of time. This is a standard solution for considering cooperation between bounded rational agents with shared source code.
When attempting to achieve mutual cooperation (essentially what Newcomblike problems are all about), making yourself difficult to analyse only helps against terribly naive intelligences. I.e., it’s a solved problem and essentially useless for all serious decision theory discussion about cooperation problems.
There’s your answer.
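For completeness, a toy sketch of the “bounded agents with shared source code” idea: each agent simulates the other under a strictly smaller step budget, so the mutual simulation bottoms out instead of regressing forever. The budget-halving rule and the optimistic base case are simplifying assumptions for illustration, not a quotation of any particular decision theory.

```python
# Toy "bounded mutual simulation": each agent receives the other's
# code (here just a function object) plus a step budget, and simulates
# the other with a smaller budget so the recursion must terminate.

def mirror_bot(opponent, budget):
    """Cooperate iff a budget-limited simulation of the opponent
    (playing against mirror_bot) cooperates."""
    if budget <= 0:
        # Out of budget: assume good faith, so that two mirror_bots
        # simulating each other bottom out in cooperation. Real
        # constructions are more careful about this default.
        return "cooperate"
    if opponent(mirror_bot, budget // 2) == "cooperate":
        return "cooperate"
    return "defect"

def cooperate_bot(opponent, budget):
    return "cooperate"

def defect_bot(opponent, budget):
    return "defect"

print(mirror_bot(cooperate_bot, 1000))  # cooperate
print(mirror_bot(defect_bot, 1000))     # defect
print(mirror_bot(mirror_bot, 1000))     # cooperate -- no infinite regress
```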