on your view, are you capable of precommitting to one-box or two-box?
You have answered that “one can argue that” you are capable of it. Which, well, OK, that’s probably true. One could also argue that you aren’t, I imagine.
So… on your view, are you capable of precommitting?
Because earlier you seemed to be saying that you weren’t able to. I think you’re now saying that you can (but that other people can’t). But it’s very hard to tell.
I can’t tell whether you’re just being slippery as a rhetorical strategy, or whether I’ve actually misunderstood you.
That aside: it’s not actually clear to me that precommitting to one-boxing is necessary. The predictor doesn’t require me to precommit to one-boxing, merely to have some set of properties that results in me one-boxing. Precommitment is a simple example of such a property, but hardly the only possible one.
See, that’s where I disagree. If you choose to one-box, even if that choice is made on a whim right before you’re required to select a box or boxes, Omega can predict that choice with high accuracy. This isn’t backward causation; it’s simply what happens when you have a very good predictor. The problem with causal decision theory is that it neglects these sorts of acausal logical connections, electing instead to keep track only of causal connections.

If Omega can predict you with high-enough accuracy, he can predict choices that you would make given certain information. Take a random passerby and present them with a formulation of Newcomb’s Problem: Omega can analyze that passerby’s disposition and predict in advance how it will shape their reaction to that particular formulation, including whether they will one-box or two-box. Conscious precommitment is not required; the only requirement is that you make a choice. If you or anyone else chooses to one-box, regardless of whether they’ve previously heard of Newcomb’s Problem or made a precommitment, Omega will predict that decision with whatever accuracy we specify.

Then the only questions are “How high an accuracy do we need?” and “Can humans reach that level of accuracy?” While I’m hesitant to give an absolute threshold for the first question, I do not hesitate at all to answer the second with “Yes, absolutely.” Thus we see that Newcomb-like situations can and do pop up in real life, with merely human predictors.
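To put a rough number on the first question, here is a back-of-the-envelope calculation using the standard payoffs ($1000 in Box A, $1000000 in Box B if one-boxing is predicted): if Omega’s predictive accuracy is p, one-boxing has an expected payoff of $1000000 × p, while two-boxing has an expected payoff of $1000 + $1000000 × (1 − p). One-boxing comes out ahead whenever $1000000 × p > $1000 + $1000000 × (1 − p), i.e. whenever p > 0.5005. So, at least on this simple expected-value accounting, a predictor only slightly better than a coin flip is already enough.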
If there are any particulars you disagree with in the above explanation, please let me know.
If Omega can predict you with high-enough accuracy, he can predict choices that you would make given certain information.
Sure, I agree, Omega can do that.
However, when I get to move, when I have the opportunity to make a choice, Omega is already done with his prediction. Regardless of what his prediction was, the optimal choice for me after Stage 1 is to two-box.
My choice cannot change what’s in the boxes—only Omega can determine what’s in the boxes and I have no choice with respect to his prediction.
Well, if you reason that way, you will end up two-boxing. And, of course, Omega will know that you will end up two-boxing, so he will put nothing in Box B. If, on the other hand, you had chosen to one-box instead, Omega would have known that, too, and he would have put $1000000 in Box B. It doesn’t matter what reasoning you use to justify two-boxing, or how elaborate your argument is; if you end up two-boxing, you are going to walk away with only $1000, with probability equal to Omega’s predictive accuracy. Sure, you can say, “Oh, the contents of the boxes are already fixed, so I’m gonna two-box!”, but if you do, there is not going to be anything in Box B: you will get $1000 when you could have gotten $1000000. Remember, the goal of a rationalist is to win. If you want to win, you will one-box. Period.
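To spell out the payoffs behind this (assuming the usual $1000/$1000000 version): if Omega predicted one-boxing, Box B contains $1000000, so one-boxing gets you $1000000 and two-boxing gets you $1001000; if Omega predicted two-boxing, Box B is empty, so one-boxing gets you $0 and two-boxing gets you $1000. Taken row by row, two-boxing looks $1000 better, which is the two-boxer’s dominance argument, but which row you end up in is not independent of how you choose, and that dependence is exactly what Omega’s accuracy tracks.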
You chose to two-box in this hypothetical Newcomb’s Problem when you said earlier in this thread that you would two-box. Fortunately, since this is a hypothetical, you don’t actually gain or lose any utility from answering as you did, but had this been a real-life Newcomb-like situation, you would have. If (I’m actually tempted to say “when”, but that discussion can be held another time) you ever encounter a real-life Newcomb-like situation, I strongly recommend you one-box (or whatever the equivalent of one-boxing is in that situation).
I don’t believe real-life Newcomb situations exist or will exist in my future.
I also think that the local usage of “Newcomb-like” is misleading in that it is used to refer to situations which don’t have much to do with the classic Newcomb’s Problem.
I strongly recommend you one-box
Your recommendation was considered and rejected :-)
I don’t believe real-life Newcomb situations exist or will exist in my future.
It is my understanding that Newcomb-like situations arise whenever you deal with agents who possess predictive capabilities greater than chance. It appears, however, that you do not agree with this statement. If it’s not too inconvenient, could you explain why?
when I get to move, when I have the opportunity to make a choice, Omega is already done with his prediction.
You have elsewhere agreed that you (though not everyone) have the ability to make choices that affect Omega’s prediction (including, but not limited to, the choice of whether or not to precommit to one-boxing).
That seems incompatible with your claim that all of your relevant choices are made after Omega’s prediction.
Have you changed your mind? Have I misunderstood you? Are you making inconsistent claims in different branches of this conversation? Do you not see an inconsistency? Other?
Ah. OK. And just to be clear: you believe that advance warning is necessary in order to decide whether to one-box or two-box… it simply isn’t possible, in the absence of advance warning, to make that choice; rather, in the absence of advance warning humans deterministically two-box. Have I understood that correctly?
it simply isn’t possible, in the absence of advance warning, to make that choice
Correct.
in the absence of advance warning humans deterministically two-box
Nope. I think two-boxing is the right thing to do, but humans are not deterministic; they can (and do) do all kinds of stuff. If you run an empirical test, I think it’s very likely that some people will two-box and some people will one-box.
Gotcha: they don’t have a choice about which they do, on your account, but they might do one or the other. Correction accepted.
Incidentally, for the folks downvoting Lumifer here, I’m curious as to your reasons. I’ve found many of their earlier comments annoyingly evasive, but now they’re actually answering questions clearly. I disagree with those answers, but that’s another question altogether.
In which way am I not accountable? I am here, answering questions, not deleting my posts.
Sure, I often prefer to point to something rather than plop down a full specification. I am also rather fond of irony and sarcasm. But that’s not exactly the same thing as avoiding accountability, is it?
If you want highly specific answers, ask highly specific questions. If you feel there is ambiguity in the subject, resolve it in the question.
So, I asked:
You have answered that “one can argue that” you are capable of it.
Which, well, OK, that’s probably true.
One could also argue that you aren’t, I imagine.
So… on your view, are you capable of precommitting?
Because earlier you seemed to be saying that you weren’t able to.
I think you’re now saying that you can (but that other people can’t).
But it’s very hard to tell.
I can’t tell whether you’re just being slippery as a rhetorical strategy, or whether I’ve actually misunderstood you.
That aside: it’s not actually clear to me that precommitting to one-boxing is necessary. The predictor doesn’t require me to precommit to one-boxing, merely to have some set of properties that results in me one-boxing. Precommitment is a simple example of such a property, but hardly the only possible one.
I can precommit, but I don’t want to. Other people (in the general case) cannot precommit because they have no idea about Newcomb’s Problem.
Sure, but that has nothing to do with my choices.
See, that’s where I disagree. If you choose to one-box, even if that choice is made on a whim right before you’re required to select a box or boxes, Omega can predict that choice with high accuracy. This isn’t backward causation; it’s simply what happens when you have a very good predictor. The problem with causal decision theory is that it neglects these sorts of acausal logical connections, electing instead to keep track only of causal connections.

If Omega can predict you with high-enough accuracy, he can predict choices that you would make given certain information. Take a random passerby and present them with a formulation of Newcomb’s Problem: Omega can analyze that passerby’s disposition and predict in advance how it will shape their reaction to that particular formulation, including whether they will one-box or two-box. Conscious precommitment is not required; the only requirement is that you make a choice. If you or anyone else chooses to one-box, regardless of whether they’ve previously heard of Newcomb’s Problem or made a precommitment, Omega will predict that decision with whatever accuracy we specify.

Then the only questions are “How high an accuracy do we need?” and “Can humans reach that level of accuracy?” While I’m hesitant to give an absolute threshold for the first question, I do not hesitate at all to answer the second with “Yes, absolutely.” Thus we see that Newcomb-like situations can and do pop up in real life, with merely human predictors.
If there are any particulars you disagree with in the above explanation, please let me know.
Sure, I agree, Omega can do that.
However, when I get to move, when I have the opportunity to make a choice, Omega is already done with his prediction. Regardless of what his prediction was, the optimal choice for me after Stage 1 is to two-box.
My choice cannot change what’s in the boxes—only Omega can determine what’s in the boxes and I have no choice with respect to his prediction.
Well, if you reason that way, you will end up two-boxing. And, of course, Omega will know that you will end up two-boxing, so he will put nothing in Box B. If, on the other hand, you had chosen to one-box instead, Omega would have known that, too, and he would have put $1000000 in Box B. It doesn’t matter what reasoning you use to justify two-boxing, or how elaborate your argument is; if you end up two-boxing, you are going to walk away with only $1000, with probability equal to Omega’s predictive accuracy. Sure, you can say, “Oh, the contents of the boxes are already fixed, so I’m gonna two-box!”, but if you do, there is not going to be anything in Box B: you will get $1000 when you could have gotten $1000000. Remember, the goal of a rationalist is to win. If you want to win, you will one-box. Period.
Notice the tense you are using: “had chosen”. When did that choice happen? (for a standard participant)
You chose to two-box in this hypothetical Newcomb’s Problem when you said earlier in this thread that you would two-box. Fortunately, since this is a hypothetical, you don’t actually gain or lose any utility from answering as you did, but had this been a real-life Newcomb-like situation, you would have. If (I’m actually tempted to say “when”, but that discussion can be held another time) you ever encounter a real-life Newcomb-like situation, I strongly recommend you one-box (or whatever the equivalent of one-boxing is in that situation).
I don’t believe real-life Newcomb situations exist or will exist in my future.
I also think that the local usage of “Newcomb-like” is misleading in that it is used to refer to situations which don’t have much to do with the classic Newcomb’s Problem.
Your recommendation was considered and rejected :-)
It is my understanding that Newcomb-like situations arise whenever you deal with agents who possess predictive capabilities greater than chance. It appears, however, that you do not agree with this statement. If it’s not too inconvenient, could you explain why?
Can you define what a “Newcomb-like” situation is, and how I can distinguish such a situation from a non-Newcomb-like one?
You have elsewhere agreed that you (though not everyone) have the ability to make choices that affect Omega’s prediction (including, but not limited to, the choice of whether or not to precommit to one-boxing).
That seems incompatible with your claim that all of your relevant choices are made after Omega’s prediction.
Have you changed your mind? Have I misunderstood you? Are you making inconsistent claims in different branches of this conversation? Do you not see an inconsistency? Other?
Here when I say “I” I mean “a standard participant in the classic Newcomb’s Problem”. A standard participant has no advance warning.
Ah. OK. And just to be clear: you believe that advance warning is necessary in order to decide whether to one-box or two-box… it simply isn’t possible, in the absence of advance warning, to make that choice; rather, in the absence of advance warning humans deterministically two-box. Have I understood that correctly?
Correct.
Nope. I think two-boxing is the right thing to do, but humans are not deterministic; they can (and do) do all kinds of stuff. If you run an empirical test, I think it’s very likely that some people will two-box and some people will one-box.
Gotcha: they don’t have a choice about which they do, on your account, but they might do one or the other. Correction accepted.
Incidentally, for the folks downvoting Lumifer here, I’m curious as to your reasons. I’ve found many of their earlier comments annoyingly evasive, but now they’re actually answering questions clearly. I disagree with those answers, but that’s another question altogether.
There are a lot of behaviorists here. If someone doesn’t see the light, apply electric prods until she does X-)
It would greatly surprise me if anyone here believed that downvoting you will influence your behavior in any positive way.
You think it’s just mood affiliation, on a rationalist forum? INCONCEIVABLE! :-D
I’m curious: do you actually believe I think that, or are you saying it for some other reason?
Either way: why?
A significant part of the time I operate in the ha-ha only serious mode :-)
The grandparent post is a reference to a quote from The Princess Bride.
Yes, you do, and I understand the advantages of that mode in terms of being able to say stuff without being held accountable for it.
I find it annoying.
That said, you are of course under no obligation to answer any of my questions.
In which way am I not accountable? I am here, answering questions, not deleting my posts.
Sure, I often prefer to point to something rather than plop down a full specification. I am also rather fond of irony and sarcasm. But that’s not exactly the same thing as avoiding accountability, is it?
If you want highly specific answers, ask highly specific questions. If you feel there is ambiguity in the subject, resolve it in the question.
OK. Thanks for clarifying your position.