However, quantum physics suggests that random number oracles might, in fact, be real; i.e., that there are sources of true randomness in the Universe.
They are not the same kind of random. A random oracle needs to answer the same query with the same number, while typically QM does the opposite of that.
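The distinction can be made concrete with a toy sketch (illustrative Python, not part of any formal definition): a random oracle is usually modeled by lazy sampling, so a repeated query returns the same answer it got the first time, while a quantum measurement hands you an independent fresh sample on every ask.

```python
import random

class RandomOracle:
    """Lazy-sampled random oracle: each fresh query gets a random answer,
    but repeating a query always returns the same answer as before."""
    def __init__(self):
        self._answers = {}  # query -> answer sampled on first use

    def query(self, q: bytes) -> int:
        if q not in self._answers:
            self._answers[q] = random.getrandbits(256)  # sample once, then remember
        return self._answers[q]

def qm_style_sample() -> int:
    # By contrast, a quantum-measurement-style source gives an
    # independent fresh bit every time it is asked.
    return random.getrandbits(1)

oracle = RandomOracle()
assert oracle.query(b"same question") == oracle.query(b"same question")
```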
This is QM-interpretation-dependent, of course, but any deterministic, non-random interpretation of quantum mechanics involves things like faster-than-light interaction etc., which frankly are much less intuitive.
An interpretation is just an ontological statement placed upon a model defined by equations. This means that, by definition, if the QM we are examining doesn’t have FTL interactions, neither will the interpretation.
Only a huge amount of proper scientific tests could convince me to change this conclusion.
This is odd. What you just said is: “a finite amount of evidence will convince me that an infinite amount of complexity is more probable than a finite amount.”
A random oracle needs to answer the same query with the same number
Randomness is still uncomputable even if we relax this requirement. You’re right to point out that the formal definition of a random oracle includes this requirement, but I don’t think it’s essential to this discussion. Can someone confirm?
This is odd. What you just said is: “a finite amount of evidence will convince me that an infinite amount of complexity is more probable than a finite amount.”
It has infinite complexity only in the formal model (SI + universal prior). I can choose not to believe in the formal model itself.
In fact, I should choose not to give 100% belief probability to ANY formal model. Blind subscription to the idea that some beliefs should have 100% probability leads into trouble. For example, this is the problem with Hume’s argument against miracles: “As the evidence for a miracle is always limited, as miracles are single events, occurring at particular times and places, the evidence for the miracle will always be outweighed by the evidence against.” If we assume that there will NEVER be evidence strong enough to support the fact that a miracle (i.e. a transgression of physical laws) has happened, then we’ll never have to update our understanding of physical laws. Which is just wrong: as the history of physics tells us, our understanding of physical laws is updated all the time.
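The point about 100% (and 0%) beliefs can be put in odds form. A minimal sketch, assuming nothing beyond Bayes’ theorem (the numbers are arbitrary illustrations):

```python
# Toy Bayes update in odds form: posterior odds = prior odds * likelihood ratio.
# A prior of exactly 0 or 1 is frozen forever; any intermediate prior
# can be moved by sufficiently strong evidence.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """P(H|E) via the odds form; prior is P(H), likelihood_ratio is P(E|H)/P(E|~H)."""
    if prior == 0.0:
        return 0.0          # dogmatic prior: no finite evidence can update it
    if prior == 1.0:
        return 1.0          # same problem at the other extreme
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

print(posterior(0.0, 1e12))   # 0.0 -- the Hume-style stance, immune to evidence
print(posterior(1e-9, 1e12))  # ~0.999 -- tiny but nonzero prior, overwhelmed by evidence
```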
By the way, to clarify a potential misunderstanding.
Under S.I. it is not extremely unlikely that we are in one of the high-complexity universes. This is because the 2^-l probability is the prior probability of an individual program of length l, not the cumulative probability of all programs of length l. There are 2^l possible bit strings of length l. (The programs themselves determine their length by ceasing to read the input tape, so any program of length l is also two programs of length l+2. Speaking of program lengths thus gets very confusing quickly.)
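The per-program vs. cumulative distinction can be sketched in Python (illustrative only: it ignores the self-delimiting constraint, so it treats every bit string of length l as a program, which overcounts on purpose):

```python
from fractions import Fraction

# Each individual bit string of length l gets prior weight 2^-l,
# but there are 2^l such strings, so no length class is collectively
# negligible. (Real S.I. programs are self-delimiting, so not every
# string is a valid program; this loop overcounts to make the
# per-program vs cumulative distinction visible.)
for l in range(1, 6):
    per_program = Fraction(1, 2 ** l)   # prior of one specific program of length l
    cumulative = per_program * 2 ** l   # total weight of all length-l strings
    print(l, per_program, cumulative)   # cumulative is 1 for every l in this toy count
```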
There are entirely too many ways to formalize S.I. which are basically equivalent, which makes it really difficult to discuss. I standardize on the “random bits on a read-only input tape, write-only output tape, and a single work tape initialized with zeroes” model, with the probability of a specific output string s being the prior for s. The machine M must be such that for any other machine M’ you can make a prefix such that putting that prefix at the beginning of the tape for M makes it exactly equivalent to M’. (The prefix works as an emulator; i.e. a program for the machine M’, with this prefix, will run on M exactly as if it were running on M’.) Very concise description, and you can get all the things like 2^-l out of that.
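To make the “random bits on the input tape” reading of the prior concrete, here is a toy sketch. The machine below is a made-up, non-universal stand-in for M (an assumption for illustration only); it shows how the prior of an output falls out as the probability that uniformly random input bits begin with the one self-delimiting prefix that produces it.

```python
import random

# Toy, non-universal "machine": reads input bits until it sees a 0,
# then halts and outputs how many 1s it saw.
def toy_machine(bits):
    count = 0
    for b in bits:
        if b == 1:
            count += 1
        else:
            return str(count)
    return None  # did not halt within the supplied bits

def estimate_prior(output: str, trials: int = 100_000) -> float:
    """Monte Carlo estimate of P(toy_machine emits `output`) under random input bits."""
    hits = 0
    for _ in range(trials):
        bits = [random.getrandbits(1) for _ in range(64)]
        if toy_machine(bits) == output:
            hits += 1
    return hits / trials

# Output "k" is produced exactly when the input starts with k ones then a zero:
# a single self-delimiting prefix of length k+1, so its prior is 2^-(k+1).
print(estimate_prior("0"))  # ~0.5
print(estimate_prior("2"))  # ~0.125
```

Here the 2^-l weights are not postulated; they emerge from counting which random input streams begin with a given halting prefix.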
You mean “also two programs of length l+1”, right?
I think this comment by gjm addresses the “longer programs are as likely” idea: http://lesswrong.com/r/discussion/lw/jhm/understanding_and_justifying_solomonoff_induction/adfc
Yeah, that was a typo.