In addition, I would really like to address the fact that current models can be used to predict future inputs in areas that are thus far completely unobserved. IIRC, this is how positrons were discovered, for example. If all we have are disconnected inputs, how do we explain the fact that even those inputs we haven’t yet thought of observing still correlate with our models? We would expect to see this if both sets of inputs were contingent upon some shared node higher up in the Bayesian network, but we wouldn’t expect to see this (except by chance, which is infinitesimally unlikely) if the inputs were mutually independent.
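The shared-node point can be made concrete with a quick simulation (a hypothetical toy model of my own, not anything from the thread): two observation streams driven by a common hidden node correlate strongly, while streams with no shared parent don’t.

```python
import random

random.seed(0)
N = 100_000

# Common-cause structure: a hidden node H drives both observation streams.
common_a, common_b = [], []
for _ in range(N):
    h = random.gauss(0, 1)                    # the shared parent node
    common_a.append(h + random.gauss(0, 0.5))  # each stream = H plus its own noise
    common_b.append(h + random.gauss(0, 0.5))

# Mutually independent streams: no shared parent anywhere.
indep_a = [random.gauss(0, 1) for _ in range(N)]
indep_b = [random.gauss(0, 1) for _ in range(N)]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    sx = (sum((x - mx) ** 2 for x in xs) / len(xs)) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / len(ys)) ** 0.5
    return cov / (sx * sy)

print(corr(common_a, common_b))  # strong correlation (~0.8): the shared node induces it
print(corr(indep_a, indep_b))    # near zero: independence predicts no correlation
```

The noise scale (0.5) and sample size are arbitrary choices; any shared-parent structure gives the same qualitative contrast.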
FWIW, my understanding of shminux’s account does not assert that “all we have are disconnected inputs,” as inputs might well be connected.
That said, it doesn’t seem to have anything to say about how inputs can be connected, or indeed about how inputs arise at all, or about what they are inputs into. I’m still trying to wrap my brain around that part.
ETA: oops. I see shminux already replied to this. But my reply is subtly different, so I choose to leave it up.
I don’t see how someone could admit that their inputs are connected in the sense of being caused by a common source that orders them without implicitly admitting to a real external world.
Nor do I.
But I acknowledge that saying inputs are connected in the sense that they reliably recur in particular patterns, and saying that inputs are connected in the sense of being caused by a common source that orders them, are two distinct claims, and one might accept that the former is true (based on observation) without necessarily accepting that the latter is true.
I don’t have a clear sense of what such a one might then say about how inputs come to reliably recur in particular patterns in the first place, but often when I lack a clear sense of how X might come to be in the absence of Y, it’s useful to ask “How, then, does X come to be?” rather than to insist that Y must be present.
One can of course only say that inputs have occurred in patterns up till now. Realists can explain why they would continue to do so on the basis of the Common Source meta-model; anti-realists cannot.
At the risk of repeating myself: I agree that I don’t currently understand how an instrumentalist could conceivably explain how inputs come to reliably recur in particular patterns. You seem content to conclude thereby that they cannot explain such a thing, which may be true. I am not sufficiently confident in the significance of my lack of understanding to conclude that just yet.
I.e., realism explains how you can predict at all.
This seems to me to be the question of origin “where do the inputs come from?” in yet another disguise. The meta-model is that it is possible to make accurate models, without specifying the general mechanism (e.g. “external reality”) responsible for it. I think this is close to subjective Bayesianism, though I’m not 100% sure.
I think it’s possible to do so without specifying the mechanism, but that’s not the same thing as saying that no mechanism at all exists. If you are saying that, then you need to explain why all these inputs are correlated with each other, and why our models can (on occasion) correctly predict inputs that have not been observed yet.
Let me set up an analogy. Let’s say you acquire a magically impenetrable box. The box has 10 lights on it, and a big dial-type switch with 10 positions. When you set the switch to position 1, the first light turns on, and the rest of them turn off. When you set it to position 2, the second light turns on, and the rest turn off. When you set it to position 3, the third light turns on, and the rest turn off. These are the only settings you’ve tried so far.
Does it make sense to ask the question, “what will happen when I set the switch to positions 4..10”? If so, can you make a reasonably confident prediction as to what will happen? What would your prediction be?
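For what it’s worth, the box scenario has a straightforward Bayesian treatment. Here is one illustrative calculation (the hypothesis space and equal priors are my own assumptions, not anything stated in the thread): compare a “single mechanism” hypothesis against “each setting independently shows an arbitrary light pattern.”

```python
from fractions import Fraction

# Two hypotheses about the box, given equal priors (an illustrative choice):
#   H_rule:   one underlying mechanism -- setting k lights exactly light k.
#   H_random: each setting independently shows one of the 2**10 light patterns.
prior = Fraction(1, 2)

# Observed data: settings 1..3 each lit exactly their own light.
likelihood_rule = Fraction(1) ** 3             # H_rule predicts the data exactly
likelihood_random = Fraction(1, 2 ** 10) ** 3  # each pattern equally likely under H_random

posterior_rule = (prior * likelihood_rule) / (
    prior * likelihood_rule + prior * likelihood_random
)

# Predictive probability that setting 4 lights exactly light 4.
p_light4 = posterior_rule * 1 + (1 - posterior_rule) * Fraction(1, 2 ** 10)

print(float(posterior_rule))  # overwhelmingly close to 1
print(float(p_light4))        # hence a very confident prediction for setting 4
```

Three confirmations already push the posterior odds to about a billion to one in favor of the mechanism hypothesis, which is why the confident prediction for positions 4..10 feels so natural.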
In the sense that it is always possible to leave something just unexplained. But the posit of an external reality of some sort is not explanatorily idle, and is not, therefore, ruled out by Occam’s razor. The posit of an external reality of some sort (it doesn’t need to be specific) explains, at the meta-level, the process of model formulation, prediction, accuracy, etc.
Fixed that for you.
I suppose shminux would claim that, explanatory or not, it complicates the model and thus makes it more costly, computationally speaking.
But that’s a terrible argument. If you can’t justify a posit by the explanatory work it does, then the optimum number of posits to make is zero.
Which is, in fact, the number of posits shminux advocates making, is it not? Adapt your models to be more accurate, sure, but don’t expect that to mean anything more than the model working.
Except I think he’s claimed to value things like “the most accurate model not containing slaves” (say) which implies there’s something special about the correct model beyond mere accuracy.
Shminux seems to be positing inputs and models at the least.
I think you quoted the wrong thing there, BTW.
I suppose they are positing inputs, but they’re arguably not positing models as such—merely using them. Or at any rate, that’s how I’d ironman their position.