In an analogous version of transparent Newcomb, it would be better to two-box. That version goes like this: You have two boxes in front of you. Box A contains $1000 and Box B contains $1000000. You must take Box B but have the choice of taking Box A or not. A very good predictor (ETA: the imperfect Azathoth, not the perfect Omega) put the money in Box B because it predicted that you would choose not to take Box A. The game will not be played again. What do you choose?
In that situation, it would be better for me to pick both boxes.
ETA: This is rather imperfectly specified. See my reply to wedrifid’s response for a more precise version.
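For concreteness, here is a minimal sketch (in Python, with illustrative names) of the payoffs as I read the setup above, under the stated assumption that Box B is already full and its contents no longer depend on your choice:

```python
# Payoffs in the variant above: Box B is already filled because the
# predictor expected one-boxing; the only live choice is whether to take Box A.
BOX_A = 1_000
BOX_B = 1_000_000  # already in hand, per the problem statement

def winnings(take_box_a: bool) -> int:
    """Total payout once the prediction (and hence Box B) is fixed."""
    return BOX_B + (BOX_A if take_box_a else 0)

print(winnings(True))   # 1001000: two-boxing
print(winnings(False))  # 1000000: one-boxing
```

On that reading, taking Box A strictly dominates, which is the sense in which it would be better to two-box.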
That version goes like this: You have two boxes in front of you. Box A contains $1000 and Box B contains $1000000. You must take Box B but have the choice of taking Box A or not. A very good predictor (ETA: the imperfect Azathoth, not the perfect Omega) put the money in Box B because it predicted that you would choose not to take Box A. The game will not be played again. What do you choose?
I choose one box.
(Note: Using the terminology from the post, it sounded like you meant Prometheus, not Azathoth. If you mean Azathoth, I would two-box; if modelled as a predictor at all, he is an incredibly biased predictor that can be exploited.)
And in that game, I choose two boxes. I get $1001000, and you get $1000000, so I win.
Don’t confuse this with other versions where you might not be invited to play at all. This is one game for all time. (If that’s not clear from my description of it, then that’s the fault of my description. It’s supposed to be as analogous as I can make it to the game with Azathoth.)
Using the terminology from the post, it sounded like you meant Prometheus, not Azathoth.
No, I meant Azathoth, as in my first comment in this thread. I mean to challenge the final conclusion of the OP, not the introductory lead-in to it. With Prometheus, there are added considerations (such as whether you are Prometheus’s simulation).
If you mean Azathoth, I would two-box; if modelled as a predictor at all, he is an incredibly biased predictor that can be exploited.
That is also a good answer to the final conclusion of the OP.
ETA: Let me try to specify more precisely the version of transparent Newcomb which I claim is analogous to the OP’s proposed trade with Azathoth. An imperfect predictor with a good track record (which I will call God, for kicks) presents everybody in the world with this game, once. God predicts that each person will one-box and accordingly fills Box B with $1000000 every time. You know all of this. What do you do?
This version is a bit odd because, with the possible exception of a few timeless decision theorists, it seems clear that almost everybody will pick both boxes, so whence comes God’s good track record? We can make this even more analogous by specifying that, instead of $1000, Box A contains a white elephant that most people consider to have negative utility, but which you and I (or whoever the OP is directed at) contrarianly value at a positive $1000. (This matches the fact that most people want to reproduce anyway, but the OP only presents a conundrum to those of us who don’t.) So God’s prediction that everybody will one-box is likely to be correct for most people, but not for reasons that apply to you and me. Now what do you do?
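To see how God’s track record can survive while the contrarian still gains by taking Box A, here is a rough sketch with made-up utilities: Box A is a white elephant worth −$1000 to a typical person but +$1000 to the contrarian the OP addresses.

```python
# Rough sketch of the white-elephant variant: Box B is always filled,
# and Box A's value depends on who is choosing. Utilities are illustrative.
BOX_B = 1_000_000

def best_choice(box_a_value: int) -> str:
    """Return the payoff-maximizing choice given how the chooser values Box A."""
    take_both = BOX_B + box_a_value
    take_b_only = BOX_B
    return "two-box" if take_both > take_b_only else "one-box"

print(best_choice(box_a_value=-1_000))  # typical person: one-box, so God's prediction holds
print(best_choice(box_a_value=1_000))   # contrarian: two-box, so God is wrong about them
```

So God’s universal one-boxing prediction comes out right for most people, but for reasons that have nothing to do with the contrarian’s own incentives.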
it seems clear that almost everybody will pick both boxes, so whence comes God’s good track record?
Indeed, in the problem you have specified, God seems to be an incompetent predictor. If there’s no competent predictor involved, it’s safe to two-box.
Setting aside the Azathoth problem for a moment, the transparent Newcomb’s problem I had in mind does involve a competent predictor. You would one-box in that situation, yes? Even though Omega has given you two full boxes?
As you and wedrifid agree, one can make arguments for not reproducing in HonoreDB’s original Azathoth problem; my point is simply that “I already know Azathoth’s prediction” is not a good argument.
my point is simply that “I already know Azathoth’s prediction” is not a good argument.
OK, I agree with that. What matters is not what I happen to know but that Azathoth’s one-boxing prediction (right or wrong) is guaranteed by the formulation of the problem itself.
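One hedged way to formalise that point: if the one-boxing prediction is guaranteed by the problem statement, then the probability of a full Box B is 1 no matter what you do, and even a purely evidential expected-utility calculation favours two-boxing. A small illustrative sketch (names are assumptions):

```python
# Expected-utility sketch when the "prediction" is fixed by the problem
# statement: P(Box B full | your action) = 1 for either action.
BOX_A, BOX_B = 1_000, 1_000_000

def expected_utility(action: str, p_full_given_action: float) -> float:
    """EU of an action given the probability that Box B is full conditional on it."""
    base = p_full_given_action * BOX_B
    return base + (BOX_A if action == "two-box" else 0)

for action in ("one-box", "two-box"):
    print(action, expected_utility(action, p_full_given_action=1.0))
# one-box 1000000.0
# two-box 1001000.0
```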
I get the impression that we may decline to submit to Azathoth’s breeding ultimatum for somewhat different reasons.
That may be, but my analogy is supposed to be independent of that: I’m just hypothesising that we value our own existence at 1000 times the utility of not breeding. (Which is not true for me, personally, but I pretend it is for purposes of the argument.)
PS: I edited my previous post while you were writing your response, which may or may not make a difference.
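Mapping that hypothesis back onto the Azathoth trade, as a rough illustration with assumed units: existing plays the role of the $1000000 box, and declining to breed plays the role of the $1000 box, at the hypothesised 1000-to-1 ratio.

```python
# Illustrative utilities for the analogy (units are arbitrary and assumed):
# "Box B" = already existing, "Box A" = declining to breed.
U_EXISTENCE = 1000    # already secured; Azathoth's "prediction" is fixed
U_NOT_BREEDING = 1    # the hypothesised 1/1000 side value

def total_utility(decline_to_breed: bool) -> int:
    """Utility once existence is already secured, mirroring the fixed Box B."""
    return U_EXISTENCE + (U_NOT_BREEDING if decline_to_breed else 0)

print(total_utility(True))   # 1001: analogous to two-boxing
print(total_utility(False))  # 1000: analogous to one-boxing
```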