I’m honestly not sure. I find myself confused. According to the article, they say:
They also point out that it would only take one counterexample to falsify their idea – a use of classical probabilities that is clearly isolated from the physical, quantum world.
But what would that look like exactly? Naively, it seems like the robot that flips the coin heads every time satisfies this (classical probability: ~1). Or maybe it uses a pseudo-random number generator to determine what’s going to come up next and flips the coin that particular way and then we bet on the next flip (constituting “a use of classical probabilities that is clearly isolated from the physical, quantum world”). But presumably that’s not what they mean. What counterexample would they want, then?
The authors claim that all uncertainty is quantum. A machine that flips heads 100% of the time doesn’t falsify their claim (no uncertainty), and neither does a machine that flips heads 99% of the time (they’d claim it’s quantum uncertainty). As for a machine that follows a pseudorandom bit sequence, I believe they would argue that a quantum process (like human thought) produced the seed. Indeed, they argue that our uncertainty about the n-th digit of pi is quantum uncertainty because if you want to bet on the n-th digit of pi, you have to randomly choose n somehow.
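To make the pseudorandom case concrete, here's a small sketch (my own illustration, not from the paper): a seeded PRNG is fully deterministic, so every bit of apparent randomness in its coin flips traces back to the choice of seed.

```python
import random

def prng_flips(seed, n=32):
    """Deterministic 'coin flips': the sequence is fully determined by the seed."""
    rng = random.Random(seed)
    return [rng.random() < 0.5 for _ in range(n)]

# Re-running with the same seed reproduces the sequence exactly, so the
# only place randomness can enter is the seed choice -- which, the
# authors would say, came from a quantum process (e.g. human thought).
assert prng_flips(42) == prng_flips(42)
```

On their view, betting on such flips is still "quantum" uncertainty, because your ignorance is really ignorance of how the seed was chosen.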
If they’re saying all sources of entropy are physical, that seems obvious. If they’re saying that all uncertainty is quantum, they must not know that chaotic classical simulations exist? Or are they not allowing simulations made by humans o.O
They’re saying all uncertainty is quantum. If you run a computer program whose output is very sensitive to its inputs, they’d probably say that the inputs are influenced by quantum phenomena outside the computer. Don’t ask me to defend the idea; I think it’s incorrect :)
Chaotic classical simulations? Could you elaborate?
Well, you can run things like physics engines on a computer, and their output is not quantum in any meaningful way (following deterministic rules fairly reliably). It’s not very hard to simulate systems where a small uncertainty in initial conditions is magnified very quickly, and this increase in randomness can’t really be attributed to quantum effects but can be described very well by probability. This seems to contradict their thesis that all use of probability to describe randomness is justified only by quantum mechanics.
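A minimal sketch of that kind of sensitivity (my own example, not from the paper): the logistic map at r = 4 is a purely deterministic classical system, yet two trajectories starting 10^-12 apart become completely decorrelated within a few dozen steps.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a deterministic chaotic system."""
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-12  # initial conditions differing by one part in ~10^12
max_sep = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    max_sep = max(max_sep, abs(x - y))

# The tiny initial uncertainty is amplified by many orders of magnitude --
# no quantum effects involved, yet the late-time state is best described
# probabilistically.
print(max_sep)
```

The amplification here comes entirely from the map's positive Lyapunov exponent, not from any physical noise source.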
There seems to be a mismatch of terms involved: ontological probability (propensity) and epistemological probability (uncertainty) are being confused. Reading over this discussion, I have seen claims that something called “chaotic randomness” is at work, where uncertainty results from chaotic systems because the results are so sensitive to initial conditions, but that’s not ontological probability at all.
The claim of the paper is that all actual randomness, and thus ontological probability, is a result of quantum decoherence and recoherence in both chaotic and simple systems. Uncertainty is uninvolved, though uncertainty in chaotic systems appears to be random.
That said, I believe the hypothesis is correct simply because it is the simplest explanation for randomness I’ve seen.
We argue using simple models that all successful practical uses of probabilities originate in quantum fluctuations in the microscopic physical world around us, often propagated to macroscopic scales
Their argument is that not only is quantum mechanics ontologically probabilistic, but that only ontologically probabilistic things can be successfully described by probabilities. This is obviously false (not to mention that nothing has actually been shown to be ontologically probabilistic in the first place).
Thus we claim there is no physically verified fully classical theory of probability.
They think they can get away with this claim because it can’t even be tested in a quantum world. But you can still make classical simulations and see if probability works as it should, and it’s obvious that it does. Their only argument is that it’s simpler for probability to be entirely quantum, but they fail to consider situations where quantum effects do not actually affect the system (which we can simulate and test).
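Here is one such classical test, sketched as a toy model (my own construction, not from the paper): a coin whose landing side is a deterministic function of its launch spin, where the only uncertainty is classical ignorance of the exact spin rate. Probability then describes the outcome frequencies exactly as it should.

```python
def lands_heads(omega, t=1.0):
    """Deterministic coin: heads iff it completes an even number of half-turns."""
    return int(omega * t) % 2 == 0

# Classical uncertainty only: the spin rate is known merely to lie
# somewhere in [100, 110) rad/s, so sweep that interval on a fine grid.
n = 100_000
omegas = [100.0 + 10.0 * k / n for k in range(n)]
heads_freq = sum(lands_heads(w) for w in omegas) / n

# The frequency comes out ~0.5, just as a probabilistic model predicts,
# with no quantum input anywhere in the setup.
print(heads_freq)
```

Every step here is deterministic; the 50/50 statistics arise purely from coarse-graining a classical distribution over initial conditions.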
I don’t think they refer to Bayesian probability as probability. The abstract is ill-defined (according to LessWrong’s operational definitions), but their point about ontological probabilities originating in quantum mechanics remains. I think it remains intertwined with multiverse theories, since multiverse theories seem to explain probability in a very similar sense, just not in as many words or with such sweeping claims.
Also, I would not find it obvious at all that probability works as it should in a classical simulation. In fact, it’s quite difficult to imagine a genuinely classical system that also contains randomness. It could be that childhood explanations of physical systems, framed in classical terms even while randomness was plainly present, are clouding the issue.
Either way, I don’t think it’s really worth much argument, beyond serving as a basis for probability theory.