We could require that both the agent and the predictor are machines that always halt.
However, thinking about Newcomb’s problem entails “casting yourself” as both the agent and the predictor, with a theoretically unlimited amount of time to consider strategies for the agent to defeat the predictor, as well as for the predictor to defeat the agent.
That’s just shifting the goalposts. Now you are predicting the behavior of both the agent and the predictor. If you could create an agent capable of defeating the predictor, you’d have to adjust the predictor; if you could create a predictor capable of defeating the agent, you’d have to go back to proposing strategies for the agent.
You are now the machine trying to simulate your own future behavior as you modify the agent and the predictor. And there is no requirement that you take a finite amount of time, or use a finite amount of computing power, when considering the problem. For example, the problem does not say “come up with the best strategy for the agent and predictor you can within X minutes.”
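Here is a minimal sketch of that circularity (my own toy framing, with hypothetical names, not anything given in the problem statement): an agent specified as “do whatever defeats the predictor” and a predictor specified as “do whatever anticipates the agent” each have to simulate the other before deciding, so trying to compute where they end up never bottoms out.

```python
import sys

def agent():
    # The agent one-boxes exactly when it expects the opaque box to be
    # filled, which it determines by simulating the predictor.
    return "one-box" if predictor() == "fill" else "two-box"

def predictor():
    # The predictor fills the opaque box exactly when it expects the agent
    # to one-box, which it determines by simulating the agent.
    return "fill" if agent() == "one-box" else "leave-empty"

if __name__ == "__main__":
    sys.setrecursionlimit(10_000)
    try:
        print(agent())
    except RecursionError:
        # Each simulation calls the other, so nothing ever bottoms out;
        # only an external resource bound cuts the regress off.
        print("mutual simulation never terminates")
```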
Hence, we have two equally uninteresting cases:
The agent/thinker is limited in the time or computational resources available to them, while the predictor is unlimited.
The agent/thinker and predictor are both unlimited in time and computational resources, and both must be continuously and forever modified to try to defeat each other’s strategies. They are leapfrogging up the oracle hierarchy, forever. Newcomb’s problem invites you to try to compute where they’ll end up, and the answer is undecidable, a loop.
> However, thinking about Newcomb’s problem entails “casting yourself” as both the agent and the predictor, with a theoretically unlimited amount of time to consider strategies for the agent to defeat the predictor, as well as for the predictor to defeat the agent.
I don’t think so. Newcomb’s problem is meant to be a simple situation where an agent must act in an environment more computationally powerful than itself. The perspective is very much meant to be that of the agent. If you think that figuring out how to act in an environment more powerful than yourself is uninteresting, you must be pretty bored, since that describes the situation all of us find ourselves in.
> The agent/thinker is limited in the time or computational resources available to them, while the predictor is unlimited.
My understanding is that this is generally the situation that is meant. Well, not necessarily unlimited, just with enough resources to predict the behavior of the agent.
I don’t see why you call this situation uninteresting.
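For concreteness, here is a minimal sketch of the situation I have in mind (toy payoffs and names of my own choosing, and it assumes the agent is deterministic with no private source of randomness): the agent is an ordinary, always-halting decision procedure, and the predictor only needs enough resources to run that procedure once before the boxes are set.

```python
def predictor(agent_fn):
    # Predict by simply running the agent's (deterministic, halting)
    # decision procedure, then fill the opaque box accordingly.
    predicted = agent_fn()
    return 1_000_000 if predicted == "one-box" else 0

def play(agent_fn):
    opaque = predictor(agent_fn)   # the boxes are fixed first
    choice = agent_fn()            # then the agent chooses
    transparent = 1_000
    return opaque if choice == "one-box" else opaque + transparent

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

print(play(one_boxer))   # 1000000
print(play(two_boxer))   # 1000 (the predictor saw it coming)
```

Whatever fixed procedure the agent writes down, the predictor simulates it correctly, which is exactly the agent-side perspective the problem is asking about.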