The inputs appear to be highly repeatable and consistent with each other. This could be purely due to chance, of course, but IMO this is less likely than the inputs being interdependent in some way.
Some are and some aren’t. When a certain subset of them is, I am happy to use a model that accurately predicts what happens next; if there is a choice of models, I prefer the one that is both most accurate and simplest. However, I am against extrapolating this approach into “there is this one universal thing that determines all inputs ever”.
What is the alternative, though? Over time, the trend in science has been to unify different groups of inputs; for example, electricity and magnetism were considered to be entirely separate phenomena at one point. So were chemistry and biology, or electricity and heat, etc. This happens all the time on smaller scales, as well; and every time it does, is it not logical to update your posterior probability of that “one universal thing” being out there to be a little bit higher?
And besides, what is more likely: that 10 different groups of inputs are consistent and repeatable due to N separate reasons, or due to a single reason?
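The updating argument above can be sketched numerically. This is an illustrative toy model only: the prior and the likelihoods are assumed values, not measured ones. The only structural assumption is that observing a unification of two phenomena is more probable if a single underlying cause exists, which is what makes each unification push the posterior up.

```python
# Toy Bayesian update: H1 = "one universal cause", H2 = "many independent causes".
# All numbers below are illustrative assumptions.
p_h1 = 0.1          # assumed prior for the universal-cause hypothesis
p_unify_h1 = 0.9    # assumed P(observe a unification | H1)
p_unify_h2 = 0.3    # assumed P(observe a unification | H2)

for unification in ["electricity+magnetism", "chemistry+biology", "electricity+heat"]:
    # Total probability of seeing this unification under either hypothesis.
    evidence = p_unify_h1 * p_h1 + p_unify_h2 * (1 - p_h1)
    # Bayes' rule: posterior for H1 given the observed unification.
    p_h1 = p_unify_h1 * p_h1 / evidence
    print(f"after {unification}: P(H1) = {p_h1:.3f}")
```

With these assumed numbers the posterior climbs from 0.10 to 0.25, 0.50, and 0.75 after the three historical unifications, which is the “update a little bit higher every time” dynamic described above.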