It’s only controversial because it’s dressed up in wooey vagueness
I also happen to think that under-specification of this puzzle adds significantly to the controversy.
What the puzzle doesn’t tell us is the properties of the universe in which it is set: namely, whether the universe permits the future to influence the past, which I’ll refer to as “future peeking”.
(alternatively, whether the universe somehow allows someone within the universe to precisely simulate the future faster than it actually comes—a proposition I don’t believe is ever true in any universe defined mathematically).
This is important because if the future can’t influence the past, then it is known with absolute certainty that taking two boxes can’t possibly change what’s in them (this is, after all, a basic given of the universe). Whether Omega has predicted something before is completely irrelevant now that the boxes are placed.
Alas, we aren’t told what the universe is like. If that is intentionally part of the puzzle, then the only way to solve it would be to enumerate all possible universes, assign each one a probability of being ours based on all the available evidence, and essentially come up with a probability that “future peeking” is impossible in our universe. One would then apply simple arithmetic to calculate the expected winnings.
Unfortunately, P(“future peeking allowed”) is one of those probabilities that is completely incalculable for any practical purpose. Thus if “no future peeking” isn’t a given, the best answer is “I don’t know if taking two boxes is best because there’s this one probability I can’t actually calculate in practice”.
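To make the “simple arithmetic” concrete, here is a minimal sketch of the calculation, using the standard $1,000 / $1,000,000 payoffs quoted later in the thread; the accuracy and fallback probabilities are placeholders, since the whole point of the comment is that P(“future peeking”) itself can’t be calculated in practice:

    # A sketch of the expected-winnings calculation, weighted by the credence
    # p_peek that "future peeking" is possible. Every probability other than
    # the payoffs is a hypothetical placeholder.
    PAYOFF_A = 1_000          # the always-visible box
    PAYOFF_B = 1_000_000      # box B, if Omega filled it

    def expected_winnings(p_peek, acc_if_peek=0.999, p_full_if_no_peek=0.5):
        # If peeking is possible, Omega's prediction tracks your actual choice.
        one_box_peek = acc_if_peek * PAYOFF_B
        two_box_peek = (1 - acc_if_peek) * PAYOFF_B + PAYOFF_A
        # If peeking is impossible, box B's contents are fixed regardless of your choice.
        one_box_fixed = p_full_if_no_peek * PAYOFF_B
        two_box_fixed = p_full_if_no_peek * PAYOFF_B + PAYOFF_A
        one_box = p_peek * one_box_peek + (1 - p_peek) * one_box_fixed
        two_box = p_peek * two_box_peek + (1 - p_peek) * two_box_fixed
        return one_box, two_box

    print(expected_winnings(p_peek=0.5))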
whether the universe somehow allows someone within the universe to precisely simulate the future faster than it actually comes—a proposition I don’t believe is ever true in any universe defined mathematically
As near as I can tell, this depends on dubious assumptions about a mathematical universe. You appear to treat time as fundamental, and yet reject the possibility that reality (or the Matrix) simulates a certain outcome happening at a certain time, not before (as we’d expect if reality calculated the output of a time-dependent wavefunction).
In addition, you seem to assume that reality cares about the same aspects of the situation that interest Omega. Otherwise it seems clear that Omega could get an answer sooner by leaving out all the details which don’t affect the human-level outcome.
Assume no “future peeking” and Omega only correctly predicting people as difficult to predict as you with 99.9% probability. One-boxing still wins.

While I disagree that one-boxing still wins, I’m most interested in seeing “no future peeking” and the actual Omega success rate defined as givens. It’s important that I can rely on the 99.9% value, rather than wondering whether it is perhaps inferred from their past 100 correct predictions (which could, with a non-negligible probability, have been a fluke).
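For concreteness, this is the expected-value arithmetic behind “one-boxing still wins” when the 99.9% figure is taken as a hard given that applies to your own choice, which is exactly the kind of stipulation being asked for here (a sketch of the numbers being disputed, not anyone’s endorsed decision theory):

    # Expected winnings when a 99.9% accuracy is stipulated to apply to your own choice.
    acc = 0.999
    one_box = acc * 1_000_000 + (1 - acc) * 0                  # 999,000
    two_box = acc * 1_000 + (1 - acc) * (1_000_000 + 1_000)    # 2,000
    print(one_box, two_box)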
That does indeed seem like the standard version of Newcomb’s. (Though I don’t understand your last sentence, assuming “non-negligible” does not mean 1⁄2 to the power of 100.)
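For reference, the two numbers at issue: the probability that 100 correct predictions were a blind-guessing fluke, and the accuracy one would infer from those 100 successes under a uniform prior (the prior is an assumption for illustration, not something the puzzle specifies):

    # Chance of 100 correct predictions from pure 50/50 guessing, and a simple
    # posterior accuracy estimate from 100 observed successes (uniform prior assumed).
    fluke = 0.5 ** 100
    print(fluke)                                  # ~7.9e-31
    posterior_accuracy = (100 + 1) / (100 + 2)    # Laplace's rule of succession
    print(posterior_accuracy)                     # ~0.99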
Can you spell out what you mean by “if” in this context? Because a lot of us are explicitly talking about the best algorithm to program into an AI.
Why is it important to you that the success rate be a frequentist probability rather than just a bayesian one?

I’m not sure I understand correctly, but let me phrase the question differently: what sort of confidence do we have in “99.9%” being an accurate value for Omega’s success rate?
From your previous comment I gather the confidence is absolute. This removes one complication while leaving the core of the paradox intact. I’m just pointing out that this isn’t very clear in the original specification of the paradox, and that clearing it up is useful.
To explain why it’s important, let me indeed think of an AI, as hairyfigment suggested. Suppose someone says they have let 100 previous AIs each flip a fair coin 100 times and it came out heads every single time, because they have magic powers that make it so. This someone presents me with video evidence of this feat.
If faced with this in the real world, an AI coded by me would still bet close to 50% on tails if offered to flip its own fair coin against this person, because I have strong evidence that this someone is a cheat, and their video evidence is fake. Just something I know from a huge amount of background information that was not explicitly part of this scenario.
However, when discussing such scenarios, it is sometimes useful to assume hypothetical scenarios unlike the real world. For example, we could state that this someone has actually performed the feat, and that there is absolutely no doubt about that. That’s impossible in our real world, but it’s useful for the sake of discussing bayesianism. Surely any bayesianist’s AI would expect heads with high probability in this hypothetical universe.
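A toy version of the update being described, contrasting the two framings; every prior and likelihood below is invented purely for illustration:

    # Toy Bayesian update for the coin-flip story. All numbers are made up.
    p_magic_prior = 1e-20          # prior that heads-forcing magic powers exist
    p_cheat_prior = 1 - p_magic_prior
    p_video_given_magic = 1.0      # a real magician could certainly produce the video
    p_video_given_cheat = 0.1      # a cheat can fake the video with some effort

    p_video = (p_video_given_magic * p_magic_prior
               + p_video_given_cheat * p_cheat_prior)
    p_magic_posterior = p_video_given_magic * p_magic_prior / p_video

    # The AI's probability of tails on its own flip: magic forces heads, otherwise fair.
    p_tails = (1 - p_magic_posterior) * 0.5
    print(p_magic_posterior, p_tails)   # posterior stays tiny, so p_tails stays near 0.5

    # In the stipulated hypothetical where the feat is known with certainty,
    # p_magic_posterior is simply 1, and the AI expects heads.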
So, are we looking at “Omega in the real world where someone I don’t even know tells me they are really damn good at predicting the future”, or “Omega in some hypothetical world where they are actually known with absolute certainty to be really good at predicting the future”?
Omega has been correct on each of 100 observed occasions so far—everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars.
Seems to me the language of this rules out faked video. And to explain it as a newsletter scam would, I think, require postulating 2^100 civilizations that have contact with Omega but not each other. Note that we already have some reason to believe that a powerful and rational observer could predict our actions early on.

So you tell me what we should expect here.
I’ve reviewed the language of the original statement and it seems that the puzzle is set in essentially the real world with two major givens, i.e. facts in which you have 100% confidence.
Given #1: Omega was correct on the last 100 occurrences.
Given #2: Box B is already empty or already full.
There is no leeway left for quantum effects, or for your choice affecting in any way what’s in box B. You cannot make box B full by consciously choosing to one-box. The puzzle says so, after all.
If you read it like this, then I don’t see why you would possibly one-box. Given #2 already implies the solution. The 100 successful predictions must have been achieved either through a very low-probability event or through a trick, e.g. by offering the bet only to those people whose answer you can already predict, say by reading their LessWrong posts.
If you don’t read it like this, then we’re back to the “wooey vagueness” problem, and I will once again insist that the puzzle needs to be fully defined before it can be attempted. For example, by removing both givens and instead specifying exactly what you know about those past 100 occurrences. Were they definitely not done on plants? Was there sampling bias? Am I considering this puzzle as an outside observer, or am I imagining myself being part of that universe? In the latter case I have to put some doubt into everything, as I could be hallucinating. These things matter.
With such clarifications, the puzzle becomes a matter of your confidence in the past statistics vs. your confidence about the laws of physics precluding your choice from actually influencing what’s in box B.
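One simple way to quantify that trade-off: let c be your credence that the prediction effectively tracks the choice you are about to make, keep the 99.9% accuracy figure from earlier in the thread, and note that the probability of box B being full in the “no influence” case cancels out of the comparison. This is a sketch under those assumptions, not a resolution of the dispute:

    # With c = credence the prediction tracks your choice, acc = accuracy when it does,
    # and q = chance box B is full when it doesn't:
    #   EV(one-box) = c*acc*1e6           + (1-c)*q*1e6
    #   EV(two-box) = c*((1-acc)*1e6+1e3) + (1-c)*(q*1e6+1e3)
    # q cancels from the difference, leaving EV(one) - EV(two) = c*1e6*(2*acc - 1) - 1e3.
    acc = 0.999
    break_even_c = 1_000 / (1_000_000 * (2 * acc - 1))
    print(break_even_c)    # ~0.001: above this credence one-boxing has the higher EV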