We can’t say. They are hidden; all our hypotheses about them would be unfalsifiable. Moreover, the fundamentally random and hidden variables viewpoints are indistinguishable by experiment, so choosing one is a matter of convenience, not absolute truth.
I’m not asking if the hypothesis is testable, which is a different matter. Obviously it’s impossible to distinguish pseudo-randomness from randomness if it’s done properly. But what you are suggesting is that even if it is random, it can still be thought of as a deterministic process with seemingly random but fixed hidden variables.
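As a toy illustration of that point (my own example, an illustration rather than a proof): even a non-cryptographic PRNG and an OS entropy source look identical to a naive statistical check.

```python
import os
import random
import statistics

N = 100_000

pseudo = [random.random() for _ in range(N)]  # Mersenne Twister PRNG
entropy = [b / 255 for b in os.urandom(N)]    # OS entropy, scaled to [0, 1]

# Both samples come out with mean ~0.5 and stdev ~0.29; nothing this
# simple separates the deterministic source from the "true" one.
for name, xs in [("pseudo", pseudo), ("os-entropy", entropy)]:
    print(name, round(statistics.mean(xs), 3), round(statistics.pstdev(xs), 3))
```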
I’m asking how that is different from true randomness. A hidden variable in a causal graph that itself has no cause is, for all intents and purposes, “random”. In fact, that’s probably how I would formally define randomness if I had to.
If some simple deterministic algorithm is setting all these hidden variables, that’s a different hypothesis. But if they have no cause, and you have all these variables taking totally arbitrary values for no reason, then that’s randomness.
I don’t really think it matters, which is why I don’t care whether it’s a testable hypothesis. But some people, like OP, believe it’s really important, which is how this issue came up.
Hidden variables aren’t random; they are fixed, but unknown. Maybe we are using different definitions of randomness here. Still, I can’t see why you are comfortable with a hidden deterministic algorithm setting the hidden variables; wouldn’t such an algorithm, itself having no cause, be random by your definition?
There is no point in arguing which of two hypotheses producing the same results is “really true”. We should just pick the simplest one, per Occam’s razor. But the simplest hypothesis isn’t just the one that involves fewer objects (like hidden variables); rather, it’s the one into which our existing theories fit with minimal stretch. If you agree with the interpretation of probabilities as a measure of uncertainty, then the interpretation that fits into this framework, the one with hidden variables, is the simpler choice, not fundamentally random processes.
I just don’t see any distinction between a hidden variable and a random variable; the fact that it’s fixed doesn’t change anything. It’s the difference between having a random number generator inside your program and having a deterministic program that is called with a bunch of randomly generated arguments.
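To make that analogy concrete, here’s a minimal Python sketch (the names are mine, purely illustrative). The two functions produce identically distributed output; only the bookkeeping of where the “randomness” lives differs, and a caller who can’t see how `hidden` was produced has no way to tell them apart:

```python
import random

def with_internal_rng(n):
    # The randomness lives inside the program and is drawn as it runs.
    return [random.random() for _ in range(n)]

def deterministic(args):
    # A pure function of its inputs: same arguments in, same output out.
    # All the randomness was fixed up front, before the program ran.
    return list(args)

# Pre-generate the "hidden variables" once, outside the program.
hidden = [random.random() for _ in range(5)]

print(with_internal_rng(5))   # unpredictable from the source code alone
print(deterministic(hidden))  # fixed, but unknown until you inspect hidden
```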
Either way you still have to ask where the numbers are coming from and whether they are truly random: whether they are the result of some simple deterministic algorithm, whether we could, at least in principle, predict them with total accuracy, or whether they are impossible to predict no matter how much computational power we have.
And I do think there is a practical consequence. As you mention, Occam’s razor favors simpler hypotheses. If your hypothesis has a huge number of variables that can take arbitrary values, it is far more complex than a hypothesis that allows for a random number generator.
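As a loose illustration of that complexity comparison (my framing, in the spirit of description length, not anything rigorous): a hypothesis that posits a million independent arbitrary values costs a million numbers to write down, while a short generating rule plus a tiny seed costs almost nothing, even though both can account for the same observations.

```python
import random
import sys

N = 1_000_000

# Hypothesis A: every hidden variable is an independent free parameter,
# so describing the hypothesis means listing all N values.
free_parameters = [random.random() for _ in range(N)]

# Hypothesis B: one short deterministic rule plus a small seed.
def generate(seed, n):
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

assert generate(42, 10) == generate(42, 10)  # the rule is reproducible

print(sys.getsizeof(free_parameters))  # megabytes, just for the list itself
print(sys.getsizeof(42))               # a handful of bytes for the seed
```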