Of course there is something external to our minds, which we all experience. …
Experts in the field provided prescriptions, called laws, which let you predict some future inputs, with varying success.
I’m not sure I understand your point of view, given these two statements. If experts in the field are able to predict future inputs with a reasonably high degree of certainty; and if we agree that these inputs are external to our minds; is it not reasonable to conclude that such experts have built an approximate mental model of at least a small portion of whatever it is that causes the inputs? Or are you asserting that they just got lucky?
Sorry for the newbie question, I’m late to this discussion and am probably missing a lot of context...
I’m making similar queries here, since this intrigues me and I was similarly confused by the non-postulate. Maybe between all the cross-interrogations we’ll finally understand what schminux is saying ;)
The inputs appear to be highly repeatable and consistent with each other. This could be purely due to chance, of course, but IMO this is less likely than the inputs being interdependent in some way.
The inputs appear to be highly repeatable and consistent with each other.
Some are and some aren’t. When a certain subset of them is, I am happy to use a model that accurately predicts what happens next. If there is a choice, then the most accurate and simplest model. However, I am against extrapolating this approach into “there is this one universal thing that determines all inputs ever”.
What is the alternative, though? Over time, the trend in science has been to unify different groups of inputs; for example, electricity and magnetism were considered to be entirely separate phenomena at one point. So were chemistry and biology, or electricity and heat, etc. This happens all the time on smaller scales, as well; and every time it does, is it not logical to update your posterior probability of that “one universal thing” being out there to be a little bit higher?
And besides, what is more likely: that 10 different groups of inputs are consistent and repeatable due to N reasons, or due to a single reason?
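The updating argument above can be made concrete with a toy Bayesian calculation. This is my own illustration, not anything from the thread: the likelihood values are arbitrary assumptions, chosen only to show the direction of the update. Treat each successful unification (electricity and magnetism, chemistry and biology, etc.) as evidence favoring the single-cause hypothesis H over the many-separate-causes alternative.

```python
# Toy Bayesian update (illustrative only; the numbers are assumptions).
# H = "one universal thing determines the inputs"; each observed
# unification of two input groups is treated as evidence for H.

prior = 0.5                # arbitrary starting credence in H
p_unify_given_h = 0.9      # assumed: unifications are likely if H is true
p_unify_given_not_h = 0.3  # assumed: unifications are less likely otherwise

posterior = prior
# Three unifications, e.g. E&M, chemistry/biology, electricity/heat.
for _ in range(3):
    numerator = p_unify_given_h * posterior
    posterior = numerator / (numerator + p_unify_given_not_h * (1 - posterior))

print(round(posterior, 3))  # → 0.964
```

With these made-up likelihoods, three unifications carry the credence in a single underlying cause from 0.5 to about 0.96; the exact figure depends entirely on the assumed likelihood ratio, but any ratio above 1 pushes the posterior the same way.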
Intuitively, to me at least, it seems simpler to assume that everything has a cause, including the regularity of experimental results; and a mathematical algorithm whose computed outputs are what we perceive as inputs / experimental results strikes me as a simpler cause than randomness, magic, or nothingness.
See also my other reply to your other reply (heh). I think I’m piecing together your description of things now. I find your consistency with it rather admirable (and very epistemologically hygienic, I might add).
why assume that something does, unless it’s an accurate assumption (i.e. testable, tested and confirmed)?
Because there are stable relationships between outputs (actions) and inputs. We all test that hypothesis multiple times a day.