Yes, that’s right; I regret calling it a problem instead of just a “scenario”.
As a follow-up though, I would say that the standard Newcomb’s problem is (essentially) functionally equivalent to:
“Omega scans your brain. If it concludes that you would two-box in Newcomb’s Problem, it hands you at most $1,000 and flies off. If it concludes that you would one-box in Newcomb’s Problem, it hands you at least $1,000,000 and flies off.”
No, that doesn’t work. It seems to me you’ve confused yourself by constructing a fake symmetry between these problems. It wouldn’t make any sense for Omega to “predict” whether you choose both boxes in Newcomb’s if Newcomb’s were equivalent to something that doesn’t involve choosing boxes.
More explicitly:
Newcomb’s Problem is “You sit in front of a pair of boxes, which are either both filled with money (if Omega predicted you would take one box in this case) or have only one filled”. Note: describing the problem does not require mentioning “Newcomb’s Problem”; it can be expressed as a simple game tree (see here for some explanation of the tree format):
[game tree for Newcomb’s Problem]
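(For concreteness, here is a minimal sketch of that payoff structure, assuming a perfect predictor and the usual $1,000 / $1,000,000 amounts; it is only a stand-in for the tree diagram, and the function name is made up for illustration:)

```python
# Minimal sketch of Newcomb's payoff structure, assuming a perfect predictor.
# The opaque box holds $1,000,000 iff Omega predicted the agent one-boxes
# in THIS problem; the transparent box always holds $1,000.

def newcomb_payoff(one_boxes: bool) -> int:
    opaque = 1_000_000 if one_boxes else 0  # prediction matches the agent's actual policy
    transparent = 1_000
    return opaque if one_boxes else opaque + transparent

print(newcomb_payoff(True))   # 1000000 -- one-boxer (what FDT recommends)
print(newcomb_payoff(False))  # 1000    -- two-boxer (what CDT recommends)
```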
In comparison, your “Inverse Newcomb” is “Omega gives you some money iff it predicts that you take both boxes in Newcomb’s Problem, an entirely different scenario (i.e. not this case).”
The latter is more of the form “Omega arbitrarily rewards agents for taking certain hypothetical actions in a different problem” (of which a nearly limitless variety can be invented to justify any chosen decision theory¹), rather than being an actual self-contained problem which can be “solved”.
The latter also can’t be expressed as any kind of game tree without “cheating” and naming “Newcomb’s Problem” verbally—or rather, you can express a similar thing by embedding the Newcomb game tree and referring to the embedded tree, but that converts it into a legitimate decision problem, which FDT of course gives the correct answer to (TODO: draw an example ;).
(¹): Consider Inverse^2 Newcomb, which I consider the proper symmetric inverse of “Inverse Newcomb”: Omega puts you in front of two boxes and says “this is not Newcomb’s Problem, but I have filled both boxes with money iff I predicted that you take one box in standard Newcomb”. Obviously here FDT takes both boxes and a tidy $1,001,000 profit (plus the $1,000,000 from Standard Newcomb). Whereas CDT gets… $1,000 (plus $1,000 from Standard Newcomb).
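(A rough sketch of that footnote’s arithmetic, again assuming a perfect predictor and assuming one box always holds $1,000 while the other adds $1,000,000 when Omega fills both; function names are illustrative only:)

```python
# Inverse^2 Newcomb plus Standard Newcomb, assuming a perfect predictor.
# In Inverse^2 Newcomb both boxes (totalling $1,001,000) are filled iff
# Omega predicted one-boxing in *standard* Newcomb; otherwise only the
# $1,000 box has anything in it.

def standard_newcomb(one_boxes: bool) -> int:
    return 1_000_000 if one_boxes else 1_000

def inverse2_newcomb(one_boxes_in_standard: bool) -> int:
    # Taking both boxes is free here; this is not Newcomb's Problem.
    return (1_000 + 1_000_000) if one_boxes_in_standard else 1_000

for label, one_boxes in [("FDT (one-boxer)", True), ("CDT (two-boxer)", False)]:
    print(label, inverse2_newcomb(one_boxes) + standard_newcomb(one_boxes))
# FDT (one-boxer) 2001000
# CDT (two-boxer) 2000
```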