By that reasoning, you’d want to two-box on the version of Newcomb’s with transparent boxes. And yet the correct thing to do in that case is one-box.
It’s not. You should perhaps commit to one-boxing in advance, but what such a commitment does is increase the probability that Omega will appear to you with two full boxes.
But if Omega has appeared to you and you’re still capable of two-boxing (e.g. if you’ve not self-modified so as to be unable to two-box), then two-boxing is the correct thing to do in transparent Newcomb.
I guess that this is true if by ‘commit’ you mean to satisfy all of the requirements that Omega uses to predict your actions. For some variations of Newcomb’s problem (including all versions in which Omega is perfect), to do this is necessarily to pick one box, but if not, then yes, you should ‘commit’ to one-boxing and then pick both boxes.
But even so, this usage of ‘commit’ is rather stronger than the sense in which I would normally use that word. If I were Omega and I were playing Newcomb with you (but not my version, which I designed to be analogous to Azathoth), then I wouldn’t fill Box B, and you would lose.
Well, here’s the paradox: strict one-boxers in transparent Newcomb argue that they must one-box always, even when the box is empty, and therefore the boxes will be full.
Not just that, they argue that they must one-box always, even when the box is empty, BECAUSE then the box will be full.
Is that actually commitment, or is it just doublethink, the ability to hold two contradictory ideas at the same time? How can you commit to taking a course of action (grabbing an empty box) in order to make that course of action (grabbing an empty box) impossible?
And yeah, I’m sure I’d lose at playing transparent Newcomb, but I’m not sure that anyone but a master of doublethink could win it.
I’m not sure that anyone but a master of doublethink could win it.
If I know that I’m going to play transparent Newcomb, and the only way to win at transparent Newcomb is to become a master of doublethink, then I want to become a master of doublethink.
Well, here’s the paradox: strict one-boxers in transparent Newcomb argue that they must one-box always, even when the box is empty, and therefore the boxes will be full.
No, they argue that they must one-box always, even when they think they see the box is empty.
The argument is that you can’t do the Bayesian update P(the box is empty | I see the box as empty) = 1, because Bayesian updating in general fails to “win” when there are other copies of you in the same world, or when others can do source-level predictions of you. Instead, you should use Updateless Decision Theory.
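For concreteness, here is a rough sketch in Python of the policy-level comparison being gestured at here. The predictor accuracy p, the assumption that Omega fills Box B iff it predicts you would one-box on seeing it full, and the rule that you grab Box A when B turns up empty are all simplifying assumptions of mine for illustration, not part of UDT's actual formalism.

```python
# Toy expected-value comparison of two dispositions in transparent Newcomb.
# Assumption (mine): the predictor is right with probability p, and fills
# Box B iff it predicts you would one-box upon seeing B full.

SMALL, BIG = 1_000, 1_000_000

def one_boxer_value(p):
    # Predicted correctly with probability p, so B is full that often;
    # on a misprediction B is empty and you settle for Box A alone.
    return p * BIG + (1 - p) * SMALL

def two_boxer_value(p):
    # Predicted correctly with probability p, so B is empty that often;
    # only a misprediction leaves both boxes full for you.
    return p * SMALL + (1 - p) * (BIG + SMALL)

for p in (0.5, 0.9, 0.99):
    print(p, one_boxer_value(p), two_boxer_value(p))
# The one-boxing disposition pulls ahead once p is even slightly above 1/2
# (the crossover here is 1000/1999), which is the sense in which the
# "commitment" pays even though the boxes are already sitting there.
```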
BTW, I don’t think UDT is applicable to most human decisions (or rather, it probably tells you to do the same things as standard decision theory), including things like voting or contributing to charity, or deciding whether to have children, because I think logical correlations between ordinary humans are probably pretty low. (That’s just an intuition though since I don’t know how to do the calculations.)
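Here is one way that calculation might be set up, purely as an illustration; the linear "effective votes" model and every number in it are assumptions of mine, not anything established.

```python
# Illustration only: if a fraction c of N like-minded people would decide
# the same way you do (are "logically correlated" with you), your choice
# behaves roughly like 1 + c*N votes instead of one.  All numbers made up.

def expected_value_of_voting(p_pivotal_per_vote, value_of_outcome,
                             n_like_minded, correlation):
    effective_votes = 1 + correlation * n_like_minded
    return effective_votes * p_pivotal_per_vote * value_of_outcome

# With negligible logical correlation, this collapses to the ordinary
# answer (one vote's worth), i.e. UDT recommends the same thing as
# standard decision theory.
print(expected_value_of_voting(1e-7, 1e9, 1_000_000, 0.0))    # about 100
print(expected_value_of_voting(1e-7, 1e9, 1_000_000, 1e-3))   # about 100,100
```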
No, they argue that they must one-box always, even when they think they see the box is empty.
If we can’t trust our senses more than Omega’s predictive powers, then the “transparent” boxes are effectively opaque, and the problem becomes essentially normal Newcomb.
In an analogous version of transparent Newcomb, it would be better to two-box. That version goes like this: You have two boxes in front of you. Box A contains $1000 and Box B contains $1000000. You must take Box B but have the choice of taking Box A or not. A very good predictor (ETA: the imperfect Azathoth, not the perfect Omega) put the money in Box B because it predicted that you would choose not to take Box A. The game will not be played again. What do you choose?
In that situation, it would be better if I pick both boxes.
ETA: This is rather imperfectly specified. See my response to wedrifid’s response for a more precise version.
That version goes like this: You have two boxes in front of you. Box A contains $1000 and Box B contains $1000000. You must take Box B but have the choice of taking Box A or not. A very good predictor (ETA: the imperfect Azathoth, not the perfect Omega) put the money in Box B because it predicted that you would choose not to take Box A. The game will not be played again. What do you choose?
I choose one box.
(Note: Using the terminology from the post, it sounded like you meant Prometheus, not Azathoth. If Azathoth, I would two-box: if modelled as a predictor at all, he is an incredibly biased predictor that can be exploited.)
And in that game, I choose two boxes. I get $1001000, and you get $1000000, so I win.
Don’t confuse this with other versions where you might not be invited to play at all. This is one game for all time. (If that’s not clear from my description of it, then that’s the fault of my description. It’s supposed to be as analogous as I can make it to the game with Azathoth.)
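To spell out the arithmetic behind this reply (my own framing of it): once the predictor has acted and will never act again, the contents of Box B are fixed, so the only thing your choice changes is whether you also collect Box A.

```python
# Dominance check for the one-shot game above: Box B's contents are already
# settled, so taking Box A as well simply adds its $1000 in every case.

BOX_A = 1_000

def payoff(box_b_contents, take_box_a):
    return box_b_contents + (BOX_A if take_box_a else 0)

for box_b in (1_000_000, 0):          # full or empty, already decided
    for take_a in (True, False):
        print(f"B={box_b:>9} take A={take_a!s:5} payoff={payoff(box_b, take_a)}")
# Whichever row you are actually in, taking Box A is worth exactly $1000
# more, which is all the two-boxer's reply is appealing to.
```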
Using the terminology from the post, it sounded like you meant Prometheus, not Azathoth.
No, I meant Azathoth, as in my first comment in this comment thread. I mean to challenge the final conclusion of the OP, not the introductory lead-in to it. With Prometheus, there are added considerations (such as whether you are Prometheus’s simulation).
If Azathoth, I would two-box: if modelled as a predictor at all, he is an incredibly biased predictor that can be exploited.
That is also a good answer to the final conclusion of the OP.
ETA: Let me try to specify more precisely the version of transparent Newcomb which I claim is analogous to the OP’s proposed trade with Azathoth. An imperfect predictor with a good track record (which I will call God, for kicks) presents everybody in the world with this game, once. God predicts that each person will one-box and accordingly fills Box B with $1000000 every time. You know all of this. What do you do?
This version is a bit odd because, with the possible exception of a few timeless decision theorists, it seems clear that almost everybody will pick both boxes, so whence comes God’s good track record? We can make this even more analogous by specifying that, instead of $1000, Box A contains a white elephant that most people consider to have negative utility, but which you and I (or whoever the OP is directed at) contrarianly value at a positive $1000. (This matches the fact that most people want to reproduce anyway, but the OP only presents a conundrum to those of us who don’t.) So God’s prediction that everybody will one-box is likely to be correct for most people, but not for reasons that apply to you and me. Now what do you do?
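A quick sketch of the payoffs in this version, assuming (as specified) that God fills Box B with the $1000000 every time, so the only live question is how much you personally value the contents of Box A; the -$5000 figure for the white elephant is an arbitrary stand-in of mine.

```python
# Payoffs in the "God" variant: Box B is filled every time regardless, so the
# decision turns entirely on your own valuation of Box A's contents.

BOX_B = 1_000_000

def payoff(value_of_box_a, take_box_a):
    return BOX_B + (value_of_box_a if take_box_a else 0)

# Most people: the white elephant is a burden (an assumed -$5000 here).
print(payoff(-5_000, True), payoff(-5_000, False))   # 995000 1000000 -> leave Box A
# You and me, by hypothesis: Box A's contents are worth +$1000.
print(payoff(1_000, True), payoff(1_000, False))     # 1001000 1000000 -> take Box A
# So God's blanket "everyone will one-box" prediction comes out right for the
# first group and wrong for the second, without any real predictive work.
```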
it seems clear that almost everybody will pick both boxes, so whence comes God’s good track record?
Indeed, in the problem you have specified, God seems to be an incompetent predictor. If there’s no competent predictor involved, it’s safe to two-box.
Setting aside the Azathoth problem for a moment, the transparent Newcomb’s problem I had in mind does involve a competent predictor. You would one-box in that situation, yes? Even though Omega has given you two full boxes?
As you and wedrifid agree, one can make arguments for not reproducing in HonoreDB’s original Azathoth problem; my point is simply that “I already know Azathoth’s prediction” is not a good argument.
my point is simply that “I already know Azathoth’s prediction” is not a good argument.
OK, I agree with that. What matters is not what I happen to know but that Azathoth’s one-boxing prediction (right or wrong) is guaranteed by the formulation of the problem itself.
I don’t get it. I do exist. If I never reproduce, then Azathoth predicted incorrectly (which will hardly be the first time).
(I also agree with the response that the universe isn’t better off for having me in it, but that doesn’t matter, since it has me anyway.)
logical correlations between ordinary humans are probably pretty low
Ordinary correlations between ordinary humans seem to be pretty high. Do they suffice for our needs? I’m not sure...
I get the impression that we may decline to submit to Azathoth’s breeding ultimatum for somewhat different reasons.
That may be, but my analogy is supposed to be irrelevant to that: I’m just hypothesising that we value our own existence at 1000 times the utility of not breeding. (Which is not true for me, personally, but I pretend for purposes of the argument.)
PS: I edited my previous post while you were writing your response, which may or may not make a difference.