Newcomb: Can’t do what’s optimal

You have a system that can predict perfectly what you will do in the future. It presents you with two opaque boxes. If it predicts that you will take both boxes, it places $10 in one box and $0 in the other. If it predicts that you will take only one box, it places $10 in one box and $1,000,000 in the other. The system does not use its predictive power to predict which box you will choose, but only to determine whether you choose one box or two. It uses a random number generator to decide which amount goes into which box.
This is a modified version of Newcomb’s problem.
Imagine that you are an agent that can reliably pre-commit to an action. Now imagine you pre-commit to taking only one box, in such a way that it is impossible for you not to uphold that commitment. Now if you choose a box and get $10, you know that the other box contains $1,000,000 for sure.
The interesting thing is that you can end up in a scenario where you actually know that the other box, the one you did not pick, contains $1,000,000 for sure. Although you can’t take it because of the pre-commitment mechanism. And that pre-commitment mechanism is the only thing that prevents you from taking it. What I find interesting is that such a situation can arise at all.
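To make the payoff structure concrete, here is a minimal simulation sketch (my own illustration, assuming only the payoffs above; the `accuracy` parameter is an added assumption to cover imperfect predictors as well):

```python
import random

def play(one_boxer: bool, accuracy: float = 1.0) -> int:
    """One round of the modified Newcomb game described above."""
    # The system guesses whether the agent takes one box or two,
    # correctly with probability `accuracy`, and fills the boxes accordingly.
    predicted_one_boxer = one_boxer if random.random() < accuracy else not one_boxer
    boxes = [10, 1_000_000] if predicted_one_boxer else [10, 0]
    random.shuffle(boxes)  # random placement: the agent can't tell the boxes apart
    if one_boxer:
        return boxes[0]            # take a single (random) box
    return boxes[0] + boxes[1]     # take both boxes

def average_payoff(one_boxer: bool, accuracy: float = 1.0, rounds: int = 100_000) -> float:
    return sum(play(one_boxer, accuracy) for _ in range(rounds)) / rounds

if __name__ == "__main__":
    print("one-box:", average_payoff(True))    # ~500,005 with a perfect predictor
    print("two-box:", average_payoff(False))   # always 10 with a perfect predictor
```

With a perfect predictor, two-boxing always yields $10, while one-boxing averages about $500,005 per round; and whenever the single box you drew contains $10, you know the other one holds the $1,000,000.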
You have a system that can predict perfectly what you will do in the future.
In fact, I do not. This (like Newcomb) doesn’t tell me anything about the world. In this set-up, what does imagining the pre-commitment do for us? The system predicts correctly whether I pre-commit or not, right?
Also, of course, there is no system in reality that can predict you perfectly, but this is about an idealised scenario, which is relevant because there are systems that can predict you with more than 50% accuracy.
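For what it’s worth, here is a back-of-the-envelope expected-value sketch for an imperfect predictor (my own calculation, assuming the payoffs above and that an agent who takes one box gets a uniformly random one of the two):

```python
# Expected payoffs as a function of predictor accuracy p
def ev_one_box(p: float) -> float:
    # predicted correctly: boxes hold $10 and $1,000,000, you draw one at random
    # predicted incorrectly: boxes hold $10 and $0
    return p * (10 + 1_000_000) / 2 + (1 - p) * (10 + 0) / 2

def ev_two_box(p: float) -> float:
    # predicted correctly: you collect $10 + $0
    # predicted incorrectly: you collect $10 + $1,000,000
    return p * (10 + 0) + (1 - p) * (10 + 1_000_000)

for p in (0.5, 0.7, 0.9, 1.0):
    print(f"p={p}: one-box {ev_one_box(p):,.0f}, two-box {ev_two_box(p):,.0f}")
```

Under these particular payoffs, one-boxing only overtakes two-boxing once the accuracy exceeds roughly two thirds, so how much accuracy matters depends on the stakes rather than on the 50% mark itself.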
Although you can’t take it because of the pre-commitment mechanism.
This is a crux for me. In worlds where this kind of prediction is possible, you can no longer say “because of” and really know that’s true. I suspect the pre-commitment mechanism is the way you KNOW that you can’t take the box, but it’s not why you can’t take the box.
I don’t really get that. For example, you could put a cryptographic lock on the box (let’s assume there is no way around it without the key) and then throw away the key. It seems that now you actually are not able to access the box, because you do not have the key. And you can, at the same time, know that this is the case. Not sure why this should be impossible to say.
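A toy sketch of the kind of lock meant here (an illustrative addition with a hypothetical `LockedBox` class; a real mechanism would also have to erase the key beyond recovery, which `del` alone does not guarantee):

```python
import hashlib
import secrets

class LockedBox:
    """A box that opens only for a key whose SHA-256 hash matches the stored one."""

    def __init__(self, contents: int, key: bytes):
        self._contents = contents
        self._key_hash = hashlib.sha256(key).hexdigest()

    def open(self, key: bytes) -> int:
        if hashlib.sha256(key).hexdigest() != self._key_hash:
            raise PermissionError("wrong key: the box stays shut")
        return self._contents

key = secrets.token_bytes(32)        # generate a random 256-bit key ...
box = LockedBox(1_000_000, key)
del key                              # ... and "throw it away"
# From here on, opening the box would require guessing a 256-bit value,
# and everyone can know that this is the case. (Toy model only: in Python
# you could still peek at box._contents; the point is the key-discarding pattern.)
```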
Sure, there are any number of commitment mechanisms which would be hard (or NP-hard) to bypass. If the prediction and box-content selection were performed by Omega based on that cause, then fine. If instead they were based on a more complete modeling of the universe, REGARDLESS of whether the visible mechanism “could” be bypassed, then there are causes other than that mechanism.
There could be, but there does not need to be, I would say. Or maybe I really do not get what you are talking about. It could really be that, if the cryptographic lock were not in place, you could take the box, and that there is nothing else preventing you from doing so. I guess I have an implicit model where I look at the world from a Cartesian perspective. So is your point about counterfactuals: that I am using them in a way that is not valid, and that I do not acknowledge this?
I think my main point is that “because” is a tricky word to use normally, and it gets downright weird in a universe that includes Omega-level predictions about actions that feel “free” from the agent’s perspective.
If Omega made the prediction, that means Omega sees the actual future, regardless of causality or intent or agent-visible commitment mechanisms.