Omega’s statement can be rephrased as “in all possible universes within the problem space, Prometheus thinks you will one-box”. The other universes have already been excluded from the problem by the time you make your decision. Now, in some (probably the vast majority) of those universes Prometheus will be right; in some of them he’ll be wrong; but conditioning on the known fact of his belief violates exactly the same anti-anthropic idea you were using earlier!
I’m explicitly not assuming that Prometheus believes I will one-box, so I don’t understand what you are referring to here.
Then you’re solving some other problem, not this one. Part of the setup is that Prometheus believes you to be a one-boxer (or rather, guessed at some point in the past that your blueprint would produce one), and I’m not sure how you can think your way out of that unless you’re assuming exotica like Prometheus running simulations of you as part of his evaluation process—and that starts to shade away from decision theory and into applied theology.
ETA: I suppose it adds an additional wrinkle if you take into account Omega’s fallibility as well, but I don’t see how that could produce a one-box result. I assumed “wise and trustworthy” to mean “accurate”.
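To put rough numbers on the “right in the vast majority of those universes, wrong in some” point, here is a minimal Python sketch. The base rate of one-boxers among the blueprints and Prometheus’s accuracy are illustrative assumptions, not values given anywhere in the problem statement.

    # Illustrative numbers only; the problem statement does not supply them.
    base_rate_one_boxer = 0.5   # assumed fraction of blueprints that would one-box
    accuracy = 0.99             # assumed chance that Prometheus's guess is correct

    # Prometheus creates a blueprint iff he guesses "one-boxer": either a
    # one-boxer he reads correctly, or a two-boxer he misreads.
    p_created = base_rate_one_boxer * accuracy + (1 - base_rate_one_boxer) * (1 - accuracy)

    # Among created people, the fraction who will actually two-box (Bayes' rule):
    p_two_box_given_created = (1 - base_rate_one_boxer) * (1 - accuracy) / p_created

    print(f"P(created) = {p_created:.3f}")                                      # 0.500 here
    print(f"P(actually two-boxes | created) = {p_two_box_given_created:.3f}")   # 0.010 here

With those assumed numbers, conditioning on having been created leaves Prometheus wrong in about 1% of the remaining universes. The calculation does not settle the one-box/two-box question; it only illustrates the blueprint-filtering described in the quoted setup.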
“Part of the setup is that Prometheus believes you to be a one-boxer,”
No. Nowhere does it say that. It only says:
“Here was how he judged the blueprints: any that he guessed would grow into a person who would choose only Box B in this situation, he created. If he judged that the embryo would grow into a person who chose both boxes, he filed that blueprint away unused. Prometheus’s predictive ability was not perfect, but it was very strong; he was the god, after all, of Foresight.”
Not which was the case with me. Granted, Omega states that Prometheus created me, but I reject that you are allowed to draw conclusions from that, because, among other things, Omega telling a counterfactual me that Prometheus did not create me wrecks the setup, so it can’t possibly add up to sanity.
You know, you’re right. I think I was thinking about this as analogous to Solomon’s problem rather than to the transparent-boxes variation of Newcomb’s problem, but the latter is actually the correct analogy.