Would that make you a super-superintelligence? I presume that by “picking randomly” you mean random with respect to Omega; in other words, Omega cannot find and process enough information to predict you well.
Otherwise what does “picking randomly” mean?
The definition of Omega as something that can predict your actions gives it some weird powers. You could pick a box based on the outcome of a quantum event with a 50% chance of each result; Omega would then have to vanish in a puff of physical implausibility.
I suspect Omega would know you were going to do that, and would be able to put the box in a superposition dependent on the same quantum event, so that in the branches where you 1-box, box B contains $1million, and where you 2-box it’s empty.
Exactly what I was thinking.
What’s wrong with Omega predicting a “quantum event”? “50% chance” is not an objective statement, and it may well be that Omega can predict quantum events. (If not, can you explain why not, or refer me to an explanation?)
From Wikipedia:
“In the formalism of quantum mechanics, the state of a system at a given time is described by a complex wave function (sometimes referred to as orbitals in the case of atomic electrons), and more generally, elements of a complex vector space.[9] This abstract mathematical object allows for the calculation of probabilities of outcomes of concrete experiments.”
This is the best formalism we have for predicting things at this scale and it only spits out probabilities. I would be surprised if something did a lot better!
As I understand it, probabilities are observed because there are observers in two different amplitude blobs of configuration space (to use the language of the quantum physics sequence), but “the one we are in” appears random to us. And mathematically, I think quantum mechanics is the same under this view, in which there is no “inherent, physical” randomness (so it would still be the best formalism we have for predicting things).
Could you say what “physical randomness” could be if we don’t allow reference to quantum mechanics? (i.e. is that the only example? and more to the point, does the notion make any sense?)
You seem to have transitioned to another argument here… please clarify what this has to do with omega and its ability to predict your actions.
The new argument is about whether there might be inherently unpredictable things. If not, then your picking a box based on the outcome of a “quantum event” shouldn’t make Omega any less physically plausible.
What I didn’t understand is why you removed quantum experiments from the discussion. I believe it is very plausible to have something that is physically unpredictable, as long as the thing doing the predicting is bound by the same laws as what you are trying to predict.
Consider a world made of reversible binary gates with the same number of inputs as outputs (that is, every input has a unique output, and vice versa).
We want to predict one complex gate. Not a problem: just clone all the inputs and copy the gate. However, you have to do that using only reversible binary gates. Let’s start with cloning the bits.
“In” is the bit you are trying to copy without modifying, so that you can predict what effect it will have on the rest of the system. You need a minimum of two outputs, so you need another input, B.
You get to design the gate used to copy the bit and predict the system. The ideal truth table would look something like:
In | B | Out | Copy
0 | 0 | 0 | 0
0 | 1 | 0 | 0
1 | 0 | 1 | 1
1 | 1 | 1 | 1
This violates our reversibility assumption. The best copier we could make is:
In | B | Out | Copy
0 | 0 | 0 | 0
0 | 1 | 1 | 0
1 | 0 | 0 | 1
1 | 1 | 1 | 1
This copies In precisely, but mucks up the output, making our copy useless for prediction. If we could control B, or knew its value, we could correct the output. But as I have shown here, finding out the value of a bit is non-trivial. The best we could do would be to find sources of bits with statistically predictable properties and then use them for duplicating other bits.
The real world is expected to be reversible, and the no-cloning theorem applies to reality, which I think is stricter than my example. However, I hope I have shown how a simple lawful universe can be hard to predict by something inside it.
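For concreteness, the two truth tables above can be checked mechanically. Here is a minimal Python sketch (the function names are mine, invented for illustration) that treats a gate as a map from (In, B) to (Out, Copy), and calls it reversible iff that map is a bijection on the four input pairs:

```python
# A gate maps (In, B) -> (Out, Copy). Reversibility means this map
# is a bijection: all four input pairs give distinct output pairs.

def ideal_copier(inp, b):
    # The "ideal" table: Out = In, Copy = In, ignoring B entirely.
    return inp, inp

def swap_copier(inp, b):
    # The "best copier" table: Out = B, Copy = In (a swap).
    return b, inp

def is_reversible(gate):
    outputs = {gate(i, b) for i in (0, 1) for b in (0, 1)}
    return len(outputs) == 4  # bijection on the 4 input pairs

print(is_reversible(ideal_copier))  # False: (0,0) and (0,1) collide
print(is_reversible(swap_copier))   # True
```

The swap copier does duplicate In into the Copy wire, but the Out wire now carries the unknown bit B instead of In, which is exactly the “mucked up output” described above.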
In short, stop thinking of yourself (and Omega) as an observer outside physics that does not interact with the world. Copying is disturbing.
Even though I do not have time to reflect on the attempted proof, and even though it is best described as a stab at a sketch of a proof, and even though this “reversible logic gates” approach probably cannot be turned into an actual proof, and even though Nick Tarleton just explained why the “one box or two box depending on an inherently unpredictable event” strategy is not particularly relevant to Newcomb’s, I voted this up and congratulate the author (whpearson), because it is an attempt at an original proof of something very cool (namely, limits to an agent’s ability to learn about its environment) and IMHO probably relevant to the Friendliness project. More proofs and informed stabs at proofs, please!