Are you claiming that the problem arises when the agent tries to predict its own behavior, or when the predictor tries to predict the agent’s behavior? Either way, I don’t think this makes Newcomb incoherent. Even if the agent can’t solve the halting problem in general, there are programs that can solve it in specific cases, including for themselves. And the predictor can be assumed to have greater computational resources than the agent: it can run for longer, or it can have a halting oracle if you really want the agent’s type to be ‘general Turing machine’. That lets it avoid self-reference paradoxes.
We could require that both the agent and the predictor are machines that always halt.
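To make this concrete, here is a minimal Python sketch of the non-paradoxical reading (the function names and payoff numbers are my own illustration, not part of the problem statement): the agent is a total, always-halting function, and the predictor simply simulates it from the outside with more than enough resources, so no self-reference arises.

```python
# Toy model: the agent is a total (always-halting) decision procedure,
# and the predictor has enough resources to simulate it exactly.

def agent() -> str:
    """A total decision procedure: always halts with 'one-box' or 'two-box'."""
    return "one-box"  # this particular agent one-boxes; any total function works

def predictor(agent_fn) -> str:
    """Predict by simulation. The predictor runs outside the agent's own
    computation, so there is no self-referential loop here."""
    return agent_fn()

def payoff(choice: str, prediction: str) -> int:
    """Newcomb payoffs: $1M sits in the opaque box iff one-boxing was predicted."""
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    return opaque if choice == "one-box" else opaque + transparent

prediction = predictor(agent)  # the predictor commits first
choice = agent()               # then the agent chooses
print(choice, prediction, payoff(choice, prediction))  # one-box one-box 1000000
```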
However, thinking about Newcomb’s problem entails “casting yourself” as both the agent and the predictor, with a theoretically unlimited amount of time to devise strategies for the agent to defeat the predictor, as well as for the predictor to defeat the agent.
That’s just shifting the goalposts. Now you are predicting the behavior of both the agent and predictor. If you could create an agent capable of defeating the predictor, you’d have to adjust the predictor. If you could create a predictor capable of defeating the agent, you’d have to resume proposing strategies for the agent.
You are now the machine trying to simulate your own future behavior, namely how you will keep modifying the agent and the predictor. And there is no requirement that you take a finite amount of time, or use a finite amount of computing power, in considering the problem. For example, the problem does not say “come up with the best strategy for the agent and predictor that you can within X minutes.”
Hence, we have two equally uninteresting cases:
The agent/thinker is limited in the time or computational resources available to it, while the predictor is unlimited.
The agent/thinker and predictor are both unlimited in time and computational resources, and both must be continuously and forever modified to try to defeat each other’s strategies. They are leapfrogging up the oracle hierarchy, forever. Newcomb’s problem invites you to try to compute where they’ll end up, and the answer is undecidable: a loop (sketched below).
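Here is a sketch of that loop in Python, assuming (purely as an illustration, not as the canonical formulation) that the predictor predicts by simulating the agent and that the agent is free to consult the predictor about itself. Each one simulates the other, and neither computation ever bottoms out:

```python
import sys

# Diagonal construction: an agent defined to do the opposite of whatever the
# predictor predicts for it, where the predictor predicts by simulation.

def predictor(agent_fn) -> str:
    return agent_fn()  # "predict" by running the agent

def contrarian_agent() -> str:
    predicted = predictor(contrarian_agent)  # ask what the predictor expects...
    return "two-box" if predicted == "one-box" else "one-box"  # ...then do the opposite

if __name__ == "__main__":
    sys.setrecursionlimit(50)  # keep the inevitable failure short
    try:
        contrarian_agent()
    except RecursionError:
        print("agent and predictor simulate each other without ever halting")
```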
However, thinking about Newcomb’s problem entails “casting yourself” as both the agent and the predictor, with a theoretically unlimited amount of time to devise strategies for the agent to defeat the predictor, as well as for the predictor to defeat the agent.
I don’t think so. Newcomb’s problem is meant to be a simple situation where an agent must act in an environment more computationally powerful than itself. The perspective is very much meant to be that of the agent. If you think that figuring out how to act in an environment more powerful than yourself is uninteresting, you must be pretty bored, since that describes the situation all of us find ourselves in.
The agent/thinker is limited in the time or computational resources available to it, while the predictor is unlimited.
My understanding is that this is generally the situation that is meant. Well, not necessarily unlimited, just with enough resources to predict the behavior of the agent.
I don’t see why you call this situation uninteresting.