Omega has been correct on each of 100 observed occasions so far—everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars.
Assume there is no “future peeking” and that Omega correctly predicts people as difficult to predict as you only 99.9% of the time. One-boxing still wins.
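For concreteness, here is the back-of-the-envelope expected-value comparison behind that claim, taking the 99.9% figure and the payoffs from the original statement ($1,000,000 in box B, $1,000 in box A) as givens:

```python
# Expected value of each strategy, treating Omega's 99.9% accuracy as a given.
# Payoffs are the standard ones: $1,000,000 in box B, $1,000 in box A.
P_CORRECT = 0.999

# One-boxer: box B is full whenever Omega predicted correctly.
ev_one_box = P_CORRECT * 1_000_000

# Two-boxer: always gets box A; box B is full only when Omega erred.
ev_two_box = 1_000 + (1 - P_CORRECT) * 1_000_000

print(f"one-box: ${ev_one_box:,.0f}")  # one-box: $999,000
print(f"two-box: ${ev_two_box:,.0f}")  # two-box: $2,000
```

Even at 99.9% accuracy the gap is enormous; the accuracy would have to fall almost to chance before two-boxing pulled ahead.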
While I disagree that one-boxing still wins, I’m most interested in seeing the “no future peeking” assumption and the actual Omega success rate defined as givens. It’s important that I can rely on the 99.9% value, rather than wondering whether it was perhaps inferred from the past 100 correct predictions (which could, with non-negligible probability, have been a fluke).
That does indeed seem like the standard version of Newcomb’s. (Though I don’t understand your last sentence, assuming “non-negligible” does not mean (1/2)^100.)
Can you spell out what you mean by “if” in this context? Because a lot of us are explicitly talking about the best algorithm to program into an AI.
Why is it important to you that the success rate be a frequentist probability rather than just a Bayesian one?
I’m not sure I understand correctly, but let me phrase the question differently: what sort of confidence do we have in “99.9%” being an accurate value for Omega’s success rate?
From your previous comment I gather the confidence is absolute. This removes one complication while leaving the core of the paradox intact. I’m just pointing out that this isn’t very clear in the original specification of the paradox, and that clearing it up is useful.
To explain why it’s important, let me indeed consider an AI, as hairyfigment suggested. Suppose someone says they have let 100 previous AIs flip a fair coin 100 times each, and it came up heads every single time, because they have magic powers that make it so. This someone presents me with video evidence of this feat.
If faced with this in the real world, an AI coded by me would still bet close to 50% on tails if offered to flip its own fair coin against this person, because I have strong evidence that this someone is a cheat and their video evidence is fake. That is something I know from a huge amount of background information that was not explicitly part of this scenario.
However, when discussing such scenarios, it is sometimes useful to assume hypothetical scenarios unlike the real world. For example, we could state that this someone has actually performed the feat, and that there is absolutely no doubt about that. That’s impossible in our real world, but it’s useful for the sake of discussing Bayesianism. Surely any Bayesian’s AI would expect heads with high probability in this hypothetical universe.
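That intuition can be sketched as a Bayes update. The prior below is an illustrative assumption (nothing in the scenario pins it down); the point is only that 10,000 observed heads swamp any remotely reasonable prior, once the feat itself is granted as genuine:

```python
import math

# Bayesian update on the "magic powers" claim. The 1e-20 prior is an
# illustrative assumption, not something from the scenario.
# Hypotheses: H = "this someone can force heads", ~H = "the coins were fair".
prior_magic = 1e-20
n_flips = 100 * 100  # 100 AIs, 100 flips each, all heads

# Log-likelihood of the observed all-heads data under each hypothesis.
loglik_magic = math.log(1.0)           # magic makes all-heads certain
loglik_fair = n_flips * math.log(0.5)  # (1/2)^10000 for fair coins

# Posterior odds in log space, then convert to a posterior probability.
log_odds = math.log(prior_magic / (1 - prior_magic)) + (loglik_magic - loglik_fair)
posterior_magic = 1 / (1 + math.exp(-log_odds))

print(posterior_magic)  # 1.0 (to float precision): the evidence swamps the prior
```

Working in log space matters here: computing (1/2)^10000 directly would underflow to zero and lose the comparison entirely.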
So, are we looking at “Omega in the real world where someone I don’t even know tells me they are really damn good at predicting the future”, or “Omega in some hypothetical world where they are actually known with absolute certainty to be really good at predicting the future”?
Seems to me the language of this rules out faked video. And to explain it as a newsletter scam would, I think, require postulating 2^100 civilizations that have contact with Omega but not each other. Note that we already have some reason to believe that a powerful and rational observer could predict our actions early on.
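The 2^100 figure is just survivorship arithmetic: a scammer who makes 100 binary predictions and keeps only the track records that happen to come out perfect needs about 2^100 independent audiences for one perfect record to survive by chance:

```python
# Survivorship arithmetic behind the "2^100 civilizations" point:
# rigging 100 binary predictions by pure selection requires 2**100
# independent audiences, each seeing one branch of outcomes.
branches_needed = 2 ** 100
fluke_probability = 0.5 ** 100  # chance any single audience sees a perfect record

print(branches_needed)    # about 1.27e30
print(fluke_probability)  # about 7.9e-31
```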
So you tell me what we should expect here.
I’ve reviewed the language of the original statement and it seems that the puzzle is set in essentially the real world with two major givens, i.e. facts in which you have 100% confidence.
Given #1: Omega was correct on the last 100 occurrences.
Given #2: Box B is already empty or already full.
There is no leeway left for quantum effects, or for your choice affecting in any way what’s in box B. You cannot make box B full by consciously choosing to one-box. The puzzle says so, after all.
If you read it like this, then I don’t see why you would possibly one-box: Given #2 already implies the solution. The 100 successful predictions must have been achieved through a very low-probability event or through a trick, e.g. by offering the bet only to those people whose answer can already be predicted, say from their LessWrong posts.
If you don’t read it like this, then we’re back to the “gooey vagueness” problem, and I will once again insist that the puzzle needs to be fully defined before it can be attempted: for example, by removing both givens and instead specifying exactly what is known about those past 100 occurrences. Were they definitely not done on plants? Was there sampling bias? Am I considering this puzzle as an outside observer, or am I imagining myself as part of that universe? In the latter case I have to put some doubt into everything, since I could be hallucinating. These things matter.
With such clarifications, the puzzle becomes a matter of your confidence in the past statistics vs. your confidence that the laws of physics preclude your choice from actually influencing what’s in box B.
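That tradeoff can be made quantitative. A minimal sketch, assuming the payoffs from the original statement ($1,000,000 and $1,000): the break-even accuracy for Omega, above which one-boxing has the higher expected payoff, turns out to be barely above a coin flip.

```python
# Break-even accuracy p at which one-boxing and two-boxing tie,
# using the payoffs from the original statement ($1,000,000 / $1,000):
#   one-box EV: p * 1_000_000
#   two-box EV: 1_000 + (1 - p) * 1_000_000
# Equating the two and solving for p:
break_even = (1_000_000 + 1_000) / (2 * 1_000_000)
print(break_even)  # 0.5005
```

So on this reading the dispute over the givens only matters near the knife edge: the expected-value calculation flips to two-boxing only once your confidence that the past statistics apply to you drops below 50.05%.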