I take the box being opaque to mean that the contents of the box do not affect my sensory input,
Yep, that’s what the box being opaque means—the contents of the box have no causal effect on your perceptions.
and by extension that I don’t get to, e.g., watch a video of Omega putting money in the box, or do some forensic equivalent.
Nope. Watching the video would contradict this principle as well, because you would still effectively be seeing the contents of the box.
What IS allowed by Newcomb’s problem, however, is coming to the conclusion that the contents of the box and your perceptions of Omega have a common cause in terms of how Omega functions or acts. You are then free to use that reasoning to work out what the contents of the box could be.
Your interpretation of Newcomb’s problem basically makes it incoherent. For example, let’s say I’m a CDT agent and I believe Omega predicted me correctly. Then, at the moment I make my decision to two-box, but before I actually see the contents of the opaque box, I already know that the opaque box is empty. Does this mean that the box is not “opaque”, by your reasoning?
Really? What if Omega is a program which you know predicts the outputs of other simple programs written in C++, Java, and Python, and it has been fed your raw DNA as a description, since you’re human?
If I don’t think Omega is able to predict me, then it’s not Newcomb’s problem, is it? Even if we assume that the Omega program is capable of predicting humans, DNA is not that likely to be sufficient evidence for it to be able to make good predictions.
What if you just know the exact logic Omega is using?
Well, then it obviously depends on what that exact logic is.
I think you’re just describing a case where AIXI fails to learn anything from other agents because they’re too different from AIXI itself. What about my scenario where AIXI plays Newcomb’s multiple times, sometimes wanting more money and sometimes not? The program reading a_5 also appears to work right.
First of all, as I said previously, if AIXI doesn’t want the money then the scenario is not Newcomb’s. Also, I don’t think the a_5 reading program will end up being the simplest explanation even in that scenario. The program would need to use something like a_5, a_67, a_166, a_190 and a_222 in each instance of Newcomb’s problem respectively. Rather than a world program with a generic “get inputs from AIXI” subroutine, you need a world program with a “recognize Newcomblike problems and use the appropriate bits” subroutine; there is still a complexity penalty.
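A toy way to see the penalty (purely illustrative: real Solomonoff induction weighs encodings of Turing machines, not Python source lengths, and both program texts below are made up for this sketch):

```python
# Toy illustration of a Solomonoff-style prior: approximate the weight
# 2^(-l(q)) of two candidate "world programs" by the length of their source.
# Neither string is ever executed; they stand in for machine descriptions.

generic_world = """
def world(actions):
    # one generic subroutine: feed every action to the environment
    return [environment_step(a) for a in actions]
"""

special_case_world = """
def world(actions):
    # special-cased: only these hardcoded action bits reach Omega
    omega_bits = [actions[i] for i in (5, 67, 166, 190, 222)]
    return [environment_step(a) for a in actions] + omega_bits
"""

def prior_weight(program_text: str) -> float:
    """Crude proxy for 2^(-l(q)): longer descriptions get less weight."""
    return 2.0 ** (-len(program_text))

w_generic = prior_weight(generic_world)
w_special = prior_weight(special_case_world)
print(w_generic > w_special)  # True: the simpler, generic model gets more weight
```

The extra hardcoded indices are exactly the “recognize Newcomblike problems and use the appropriate bits” subroutine paying its description-length penalty.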
Unless you’re trying to make a setup in which Omega necessarily works by magic, then given sufficient evidence of reality at large magic is always going to be penalised. Given that reality at large works in a non-magical way, explanations that bootstrap your model of reality at large are always going to be simpler than explanations that have to add extraneous elements of “magic” to the model.
Besides, if Omega is just plain magical, then Newcomb’s problem boils down to “is a million bigger than a thousand?”
Well, given that predictors for AIXI are nonexistent, that should be the case.
Of course there can be predictors for AIXI. I can, for example, predict with a high degree of confidence that if AIXI knows what chess is and it wants to beat me at chess, it’s going to beat me. Also, if AIXI wants to maximise paperclips, I can easily predict that there are going to be a lot of paperclips.
edit: actually, what are your reasons for one-boxing?
By being the kind of person who one-boxes, I end up with a million dollars instead of a thousand.
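That payoff argument can be made concrete with a quick expected-value sketch. The single accuracy parameter p (the probability that Omega predicts your actual choice) is a simplification added for illustration, not part of the problem statement:

```python
# Minimal expected-value sketch of Newcomb's payoffs under an assumed
# predictor-accuracy model; p is not part of the original problem.

def expected_payoff(action: str, p: float,
                    big: float = 1_000_000, small: float = 1_000) -> float:
    """Expected winnings when Omega predicts your actual action with probability p."""
    if action == "one-box":
        # The opaque box holds $1,000,000 only if Omega predicted one-boxing.
        return p * big
    # Two-boxers always get the $1,000; the million shows up only on a miss.
    return small + (1 - p) * big

p = 0.99
print(expected_payoff("one-box", p))   # ~990000
print(expected_payoff("two-box", p))   # ~11000
```

Under this model, any predictor much better than chance already makes one-boxing the lucrative policy.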
edit2: also, I think this way of seeing the world, where your actions are entirely unlinked to the past, is a western phenomenon, some free-will philosophy stuff. A quarter of my cultural background is quite fatalist in its outlook, so I see my decisions as the consequences of the laws of physics acting on the initial world state; given the same ‘random noise’, a different decision by me implies both a different future and a different past.
Um, the “libertarian free will” perspective is mostly what I’m arguing against here. The whole problem with CDT is that it takes that perspective, and, in concluding that its action is not in any way caused by its past, it ends up with only $1000. My point is that AIXI ultimately suffers from the same problem; it assumes that it has this magical kind of free will when it actually does not, and also ends up with $1000.
Yep, that’s what the box being opaque means—the contents of the box have no causal effect on your perceptions.
Yeah, and then you kept stipulating that the model in which Omega has read the action tape and then did or didn’t put money into the box, without that leaking onto sensory input, is very unlikely; and I noted that the problem statement itself stipulates that the box contents do not leak onto sensory input.
My point is that AIXI ultimately suffers from the same problem; it assumes that it has this magical kind of free will when it actually does not, and also ends up with $1000.
Let’s say AIXI lives inside the robot named Alice. According to every model employed by AIXI, the robot named Alice has pre-committed, since the beginning of time, to act out a specific sequence of actions. How the hell that assumes magical free will, I don’t know. edit: and note that you can exclude machines which had read the action before printing matching sensory data, to actually ensure magical free will. I’m not even sure; maybe some variations by Hutter do just that.
edit:
Of course there can be predictors for AIXI. I can, for example, predict with a high degree of confidence that if AIXI knows what chess is and it wants to beat me at chess, it’s going to beat me. Also, if AIXI wants to maximise paperclips, I can easily predict that there are going to be a lot of paperclips.
That’s just abstruse. We both know what I mean.
By being the kind of person who one-boxes, I end up with a million dollars instead of a thousand.
Well, you’re just pre-committed to one-box, then. The Omegas that don’t know you’re pre-committed to one-box (e.g. don’t trust you, can’t read your pre-commitments, etc.) would put nothing there, though, which you might be motivated to think about if it’s e.g. 10 million vs 1 million, or 2 million vs 1 million. (I wonder if one-boxing is dependent on inflation...)
edit: let’s say I am playing Omega, and you know I know this weird trick for predicting you on the cheap… you can work out what’s in the first box, can’t you? If you want money and don’t care about proving Omega wrong out of spite, I can simply put nothing in the first box and count on you to figure that out. You might have committed to the situation with 1 million vs 1 thousand, but I doubt you committed to $1,000,000 vs $999,999. You say you one-box; fine, you get nothing: a rare time Omega is wrong.
edit2: a way to actually do Newcomb’s in real life, by the way. Take poor but not completely stupid people, make it $1,000,000 vs $999,999, and you can be almost always right. You can also draw in some really rich people who you believe don’t really care and would one-box for fun, put a million in the first box for those, and be almost always right about both types of case.
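The arithmetic behind why the $1,000,000 vs $999,999 variant nearly always catches one-boxers can be checked directly; the uniform-accuracy model below is an assumption for illustration, not part of the setup:

```python
# Break-even prediction accuracy at which one-boxing starts to beat
# two-boxing, under the assumed model EV(one) = p * big and
# EV(two) = small + (1 - p) * big.

def breakeven_accuracy(big: float, small: float) -> float:
    # One-boxing wins iff p * big > small + (1 - p) * big,
    # i.e. p > (big + small) / (2 * big).
    return (big + small) / (2 * big)

print(breakeven_accuracy(1_000_000, 1_000))     # ~0.5005
print(breakeven_accuracy(1_000_000, 999_999))   # ~0.9999995
```

With a $1,000 second box a barely-better-than-chance predictor already suffices, but with $999,999 on the table Omega would have to be right about 1,999,999 times out of 2,000,000 before one-boxing paid.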
Yeah, and then you kept stipulating that the model in which Omega has read the action tape and then did or didn’t put money into the box, without that leaking onto sensory input, is very unlikely; and I noted that the problem statement itself stipulates that the box contents do not leak onto sensory input.
The two situations are quite different. Any complexity penalty for the non-leaking box has already been paid via AIXI’s observations of the box and the whole Newcomb’s setup; the opaqueness of the box just boils down to normal reality.
On the other hand, your “action bit” model in which Omega reads AIXI’s action tape is associated with a significant complexity penalty because of the privileged nature of the situation—why specifically Omega, and not anyone else? Why does Omega specifically access that one bit, and not one of the other bits?
The more physics-like and real AIXI’s Turing machines get, the more of a penalty will be associated with Turing machines that need to incorporate a special case for a specific event.
Let’s say AIXI lives inside the robot named Alice. According to every model employed by AIXI, the robot named Alice has pre-committed, since the beginning of time, to act out a specific sequence of actions. How the hell that assumes magical free will, I don’t know.
edit: and note that you can exclude machines which had read the action before printing matching sensory data, to actually ensure magical free will. I’m not even sure; maybe some variations by Hutter do just that.
AIXI as defined by Hutter (not just some “variation”) has a foundational assumption that an action at time t cannot influence AIXI’s perceptions at times 1..t-1. This is entirely incompatible with a model of Alice where she has pre-committed since the beginning of time, because such an Alice would be able to discover her own pre-commitment before she took the action in question. AIXI, on the other hand, explicitly forbids world models where that can happen.
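For reference, this chronological assumption is visible in Hutter’s expectimax definition of AIXI (sketched below from memory, with m the horizon and \ell(q) the length of environment program q): each sum over a percept o_k r_k appears only after the maximisation over the corresponding action a_k, and q must reproduce the whole percept history from the whole action history, so percepts before time t are fixed before a_t is ever chosen.

```latex
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \; \max_{a_{t+1}} \sum_{o_{t+1} r_{t+1}} \cdots \max_{a_m} \sum_{o_m r_m}
  \left( r_t + \cdots + r_m \right)
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Nothing in the inner sums allows a percept with index below t to vary with a_t.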
That’s just abstruse. We both know what I mean.
No, I don’t. My point is that although you can’t predict AIXI in the general case, there are still many cases where AIXI can be predicted with relative ease. My argument is still that Newcomb’s problem is one of those cases (and that AIXI two-boxes).
As for all of your scenarios with different Omegas or different amounts of money, obviously a major factor is how accurate I think Omega’s predictions are. If ze has only been wrong one time in a million, and this includes people who have been one-boxing as well, why should I spend much time thinking about the possibility that I could be the one time ze gets it wrong?
Similarly, if you’re playing Omega and you don’t have a past history of correctly predicting both one-boxers and two-boxers, then yes, I two-box. However, that scenario isn’t Newcomb’s problem. For it to be Newcomb’s problem, Omega has to have a history of correctly predicting one-boxers as well as two-boxers.