Why do you say this? Are you merely suggesting that my prior experience with quantum coins is << a million tosses?
No, I’m not suggesting that. I think the statement stays true even if you’ve already seen 100 million quantum coinflips and they looked “fair”. The universal prior still thinks that switching to a more ordered generator for the next million coinflips is more likely than continuing with the random generator, because at that point the algorithmic complexity of preceding coinflips is already “sunk” anyway, and the algorithmic complexity of switching universes is just a small constant.
ETA: after doing some formal calculations I’m no longer so sure of this. Halp.
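The complexity comparison in the parent comment can be sketched numerically. This is only a toy calculation, not Solomonoff induction proper: the 100-bit switching overhead is a made-up illustrative constant, and the real universal prior sums over all programs rather than comparing just two.

```python
# Toy sketch of the "sunk complexity" argument, with made-up bit counts.
# Under a Solomonoff-style prior, P(program) is roughly 2^(-length in bits).

n_seen = 100_000_000   # quantum coinflips already observed (assumed incompressible)
n_next = 1_000_000     # coinflips to be predicted

# Hypothesis A: hardcode the observed prefix verbatim, then switch to an
# ordered generator (all zeros). The switch logic costs only a small constant.
switch_overhead = 100  # assumed constant, in bits
len_A = n_seen + switch_overhead

# Hypothesis B: a deterministic program matching one particular fair-looking
# continuation must hardcode the next million bits as well.
len_B = n_seen + n_next

# Log-odds (base 2) favoring A over B: the sunk n_seen bits cancel out.
log2_odds = len_B - len_A
print(log2_odds)  # 999900
```

The point of the sketch is that `n_seen` appears in both program lengths and cancels, so the odds ratio depends only on the cost of the continuation.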
I’m confused. Assuming that I “believe in” the validity of what I have been told of quantum mechanics, I fully expect that a million quantum coin tosses will generate an incompressible string. Are you suggesting that I cannot simultaneously believe in the validity of QM and also believe in the efficacy of Solomonoff induction—when applied to data which is “best explained” as causally random?
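The expectation of incompressibility can be checked empirically in a crude way: a general-purpose compressor makes essentially no headway on a pseudorandom string but collapses an ordered one. A rough sketch, with `zlib` standing in for Kolmogorov complexity (which is uncomputable):

```python
import random
import zlib

random.seed(0)
fair = bytes(random.getrandbits(8) for _ in range(125_000))  # ~a million bits
ordered = bytes(125_000)                                      # all zeros

# A pseudorandom string resists compression; an ordered one does not.
print(len(zlib.compress(fair)))     # roughly 125000 bytes: essentially incompressible
print(len(zlib.compress(ordered)))  # a few hundred bytes
```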
Off the top of my head, I am inclined to agree with this suggestion, which in turn suggests that Si is flawed. We need a variant of Si which allows Douglas_Knight’s simple fair coins, without thereby offering a simple explanation of everything. Or, we need to discard the whole Si concept as inappropriate in our non-deterministic universe.
The randomness of a source of information is not an empirical fact which we can discover and test—rather, it is an assumption that we impose upon our model of the data. It is a null hypothesis for which we cannot find Bayesian evidence—we can at best fail to reject it. (I hope the Popper-clippers don’t hear me say that!) Maybe what our revised Si should be looking for is the simplest explanation for data D[0] through D[n], one which is not refuted by data D[n+1] through D[n+k].
ETA: Whoops. That suggestion doesn’t work. The simplest such explanation will always be that everything is random.
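The failure mode noted in the ETA can be made concrete: a "pure noise" hypothesis spreads likelihood over every possible holdout, so no finite amount of further data ever drives it to zero, whereas a deterministic hypothesis can be killed by a single bit. A minimal sketch (both hypotheses and the holdout are invented for illustration):

```python
# Why "everything is random" is never refuted: the fair-coin hypothesis
# assigns the same nonzero likelihood to every possible holdout sequence.

def likelihood_fair_coin(bits):
    # P(any particular sequence | fair coin) = 2^(-len(bits))
    return 0.5 ** len(bits)

def likelihood_all_zeros(bits):
    # Deterministic hypothesis: refuted outright by any 1 in the data.
    return 1.0 if all(b == 0 for b in bits) else 0.0

holdout = [1, 0, 1, 1, 0]
print(likelihood_fair_coin(holdout))   # 0.03125 -- small, but never zero
print(likelihood_all_zeros(holdout))   # 0.0 -- deterministic hypotheses are refutable
```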
Possibly ironically relevant. Eliezer quoting Robyn Dawes:
I don’t think the universe shows any signs of being non-deterministic. The laws of physics as we understand them (e.g. the wave equation) are deterministic. So, Solomonoff induction is not broken.
Hmmm. Would you be happier if I changed my last line to read “… we need to discard the whole Si concept as inappropriate to our imperfectly-observed universe”?
I don’t think so. Solomonoff induction applies to streams. The most common application is to streams of sense data. There is no pretense of somehow observing the whole of the universe in the first place.
You are correct that my comments are missing the mark. Still, there is a sense in which the kinds of non-determinism represented by Born probabilities present problems for Si. I agree that Si definitely does not pretend to generate its predictions based on observation of the whole universe. And it does not pretend to predict everything about the universe. But it does seem to pretend that it is doing something better than making predictions that apply to only one of many randomly selected “worlds”.
Can anyone else—Cousin_it perhaps—explain why deterministic evolution of the wave function seems to be insufficient to place Si on solid ground?
They would represent problems for determinism—if they were “real” probabilities. However, the idea around here is that probabilities are in the mind.
Here is E. T. Jaynes on the topic:

It is a commonly heard statement that probabilities calculated within a pure state have a different character than the probabilities with which different pure states appear in a mixture, or density matrix. As Pauli put it, the former represents “Eine prinzipielle Unbestimmtheit, nicht nur Unbekanntheit” [*]. But this viewpoint leads to so many paradoxes and mysteries that we explore the consequences of the unified view, that all probability signifies only incomplete human information.

[*] Translation: “A fundamental indeterminacy, not merely ignorance”
Quantum physics doesn’t say they’re random per se. It says that every sequence happens, and you only observe one of them.