The inputs appear to be highly repeatable and consistent with each other. This could be purely due to chance, of course, but IMO this is less likely than the inputs being interdependent in some way.
The inputs appear to be highly repeatable and consistent with each other.
Some are and some aren’t. When a certain subset of them is, I am happy to use a model that accurately predicts what happens next; if there is a choice, I pick the most accurate and simplest model. However, I am against extrapolating this approach into “there is this one universal thing that determines all inputs ever”.
What is the alternative, though? Over time, the trend in science has been to unify different groups of inputs; for example, electricity and magnetism were considered to be entirely separate phenomena at one point. So were chemistry and biology, or electricity and heat, etc. This happens all the time on smaller scales, as well; and every time it does, is it not logical to update your posterior probability of that “one universal thing” being out there to be a little bit higher?
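To put toy numbers on that “update a little bit higher” intuition, here is a minimal Bayes-rule sketch. All the priors and likelihoods below are made-up illustrative values of mine, not anything measured; the only point is that each observed unification counts as evidence that is somewhat more expected if there really is one underlying thing than if there isn’t.

```python
# Toy Bayes update: how repeated unifications could nudge the posterior on a
# "single underlying cause" hypothesis H. Every number here is an assumption
# chosen for illustration only.

def update(prior, p_e_given_h, p_e_given_not_h):
    """One application of Bayes' rule: P(H|E) from P(H), P(E|H), P(E|not H)."""
    joint_h = p_e_given_h * prior
    return joint_h / (joint_h + p_e_given_not_h * (1.0 - prior))

posterior = 0.10  # assumed starting credence in "one universal thing"
unifications = ["electricity + magnetism", "chemistry + biology", "electricity + heat"]

for event in unifications:
    # Assume a successful unification is twice as likely if H is true.
    posterior = update(posterior, p_e_given_h=0.8, p_e_given_not_h=0.4)
    print(f"after {event}: P(H) ~ {posterior:.2f}")
# Each unification raises the credence a bit (0.18, 0.31, 0.47 here);
# none of them settles the question on its own.
```

The exact numbers don’t matter; the point is only that the updates compound in one direction.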
And besides, what is more likely: that 10 different groups of inputs are consistent and repeatable due to N reasons, or due to a single reason?
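As a back-of-the-envelope version of that question (again, the numbers and the independence assumption are mine, purely for illustration): if each group’s regularity were an unrelated coincidence, all ten coincidences would have to hold at once, whereas a single shared cause pays its improbability cost only once.

```python
# Toy comparison for "N separate reasons vs. one shared reason".
# The probabilities below are illustrative assumptions, not measurements.

n_groups = 10
p_regular_on_its_own = 0.5  # assumed chance that any one group of inputs just
                            # happens to look regular for its own unrelated reason
p_shared_cause = 0.05       # assumed (deliberately low) prior on one common cause

p_ten_coincidences = p_regular_on_its_own ** n_groups  # 0.5**10 ~ 0.001
p_one_reason = p_shared_cause                           # explains all ten at once

print(p_ten_coincidences, p_one_reason, p_one_reason / p_ten_coincidences)
# Even with a low prior, the single-cause story comes out roughly 50x ahead here.
```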
Intuitively, to me at least, it seems simpler to assume that everything has a cause, including the regularity of experimental results; and a mathematical algorithm being computed, whose outputs are what we perceive as inputs / experimental results, seems like a simpler cause than randomness, magic, or nothingness.
See also my other reply to your other reply (heh). I think I’m piecing together your description of things now. I find your consistency with it rather admirable (and very epistemologically hygienic, I might add).
why assume that something does, unless it’s an accurate assumption (i.e. testable, tested and confirmed)?
Because there are stable relationships between outputs (actions) and inputs. We all test that hypothesis multiple times a day.