I’m kind of surprised at how complicated everyone is making this, because to me the Bayesian answer jumped out as soon as I finished reading your definition of the problem, even before the first “argument” between one-boxers and two-boxers. And it’s about five sentences long:
Don’t choose an amount of money. Choose an expected amount of money—the dollar value multiplied by its probability. One-box gets you >(1,000,000*.99). Two-box gets you <(1,000*1+1,000,000*.01). One-box has superior expected returns. Probability theory doesn’t usually present you with situations in which your decision can affect the prior probabilities, but it’s no mystery what to do when one arises: the same thing as always, maximize expected utility.
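(If it helps to see that arithmetic spelled out, here’s a minimal sketch in Python; the 99% accuracy figure and the $1,000 / $1,000,000 payoffs are just the assumptions from the paragraph above, not anything fixed by the problem statement.)

```python
# Expected-value comparison for Newcomb's problem, assuming the predictor
# is right at least 99% of the time (the .99/.01 figures used above).
accuracy = 0.99  # P(prediction matches your actual choice)

# One-box: you get the $1,000,000 only if the predictor foresaw one-boxing.
ev_one_box = 1_000_000 * accuracy

# Two-box: you always get the visible $1,000, plus the $1,000,000 only if
# the predictor wrongly foresaw one-boxing.
ev_two_box = 1_000 * 1 + 1_000_000 * (1 - accuracy)

print(f"one-box: ${ev_one_box:,.0f}")  # $990,000
print(f"two-box: ${ev_two_box:,.0f}")  # $11,000
```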
Of course, while I can be proud of myself for spotting that right away, I can’t be too proud, because I know I was helped a lot by the fact that my mind was already in a “thinking about Eliezer Yudkowsky” mode, a mode it’s not necessarily in by default and might not be in when I’m presented with a dilemma (unless I make a conscious effort to put it there, which I guess I now stand a better chance of doing). I was expecting a Bayesian solution to the problem and spotted it even though it wasn’t even the point of the example. I’ve seen this problem before, after all, without the context of it being brought up by you, and I certainly didn’t come up with that solution at the time.