We don’t expect the righthand gear to, say, turn green and explode.
Ah, OK, that I would call context. Context is important. If something that looks like a gear sticks from one side of a box that an alien ship dropped off and there is another looks-like-a-gear thing on the other side, my expectations are that it might well turn green and explode. On the other hand, if we are looking at a Victorian cast-iron contraption, turning green is way down on my list of possibilities.
Context, basically, provides boundaries for the hypotheses that we are willing to consider. Sometimes we take too narrow a view and nothing fits inside the context boundaries—then a widening of the context (sometimes an explosive one) is in order. But some context is necessary; otherwise you’d be utterly lost.
But something really important changes when we see the inside.
Well, you got some evidence that you have a strong tendency to believe (though I think there were some quite discouraging psych experiments about the degree to which people are willing to believe the social consensus over their own lying eyes). And yes, there is a pretty major difference between hearsay and personal experience. But still, I’m not sure where the boundary you wish to draw lies—see stage magic, optical illusions, convincing conmen, and general trickery.
there’s something about the nature of the model that changes when you look inside the box.
There is a traditional division of models into explanatory models and forecasting models. The point of a forecasting model is to provide a forecast—and that’s how it is judged. If it provides good forecasts, it might well be a black box, and that’s not important. But for explanatory models, being a black box is forbidden. The point of an explanatory model is to provide insight and, potentially, to show what possible interventions could achieve.
Is that something related to your change of perspective as you open the box?
the word “causal”. I don’t really know what that means
There is a fair amount of literature on it—see e.g. Pearl—but, basically, a causal model makes stronger claims than, say, a correlational model. A correlational model would say things like “any time you see X you should expect to see Y”—and that might well be a very robust claim, well supported by evidence. A causal model, on the other hand, would say that X causes Y and that, specifically, changing X (an “intervention”) would lead to an appropriate change in Y. A correlational model does not make such a claim.
Interpreting correlational models as causal is a very common mistake.
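A toy simulation may make the distinction concrete. The sketch below is purely illustrative (none of the names or numbers come from the discussion): a hidden confounder Z drives both X and Y, so passive observation finds a strong correlation between X and Y even though neither one causes the other.

```python
import random

def observe(n=10_000):
    """Passive observation: a hidden confounder Z drives both X and Y."""
    xs, ys = [], []
    for _ in range(n):
        z = random.gauss(0, 1)               # unobserved common cause
        xs.append(z + random.gauss(0, 0.1))  # X tracks Z
        ys.append(z + random.gauss(0, 0.1))  # Y tracks Z too; X does not cause Y
    return xs, ys

def correlation(xs, ys):
    """Plain Pearson correlation, computed by hand to stay dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    var_x = sum((x - mx) ** 2 for x in xs) / n
    var_y = sum((y - my) ** 2 for y in ys) / n
    return cov / (var_x * var_y) ** 0.5

xs, ys = observe()
print(f"correlation(X, Y) ~ {correlation(xs, ys):.2f}")  # close to 1.0
```

The correlational claim (“any time you see X, expect Y”) holds up perfectly well here; it is only reading it causally that goes wrong.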
In what sense does this causal model not “count”?
You test causal models by interventions—does manipulating X lead to the changes you expect in Y? If you are limited to passive observation, establishing causal models is… difficult.
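Continuing the same hypothetical setup: an intervention, in the sense of Pearl’s do-operator, sets X by fiat and thereby severs it from Z. If X really caused Y, the mean of Y would shift with our chosen setting; in this sketch it does not.

```python
import random

def mean_y_under_do(x_fixed, n=10_000):
    """Intervention do(X = x_fixed): we set X ourselves, cutting it off from Z."""
    total = 0.0
    for _ in range(n):
        z = random.gauss(0, 1)
        x = x_fixed                   # X no longer tracks Z
        y = z + random.gauss(0, 0.1)  # Y still responds only to Z, not to X
        total += y
    return total / n

print(f"mean Y under do(X=-5): {mean_y_under_do(-5):+.2f}")  # ~0
print(f"mean Y under do(X=+5): {mean_y_under_do(+5):+.2f}")  # ~0: no causal effect
```

Under passive observation the X-Y correlation was near 1; under manipulation Y does not budge. That gap is exactly what the interventional test detects and what passive observation alone cannot.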
to immediately get the intuition that there’s something wrong with the type of justification of “because the teacher said so”
Isn’t that just the hearsay vs personal experience difference?
The claim is that if a Gears-like model makes a prediction and the prediction is falsified, then you can deduce something else from the falsification.
Hmmm. OK, let me try to get at it from another side. Let’s say that Gearness is the property of being tied into the wider understanding of how the world works.
Generally speaking, you have an interconnected network of various models of how the world is constructed. Some are implied by others, some are explicitly dependent on others, etc. This network is vaguely tree-like in the sense that some models are closer to the roots, and changes in them have wide-ranging repercussions (e.g. a religious (de)conversion), while some models are leaves, and changes in them affect little if anything else (e.g. learning that whales, on dying, usually sink to the ocean floor).
Gearness would then be the degree to which a model is implied and constrained by “surrounding” knowledge. Does that make any sense?
Then the second test would be basically about the implications of a particular model / result for the surrounding knowledge. Is it deeply enmeshed or does it stand by itself? And the third test is about the same thing as well—how well does the model fit into the overall picture.
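A minimal sketch of that network picture, with entirely made-up beliefs as nodes: edges point from a model to the models that rest on it, and the root-versus-leaf contrast shows up as the number of downstream models a revision could ripple into.

```python
from collections import deque

# Edges point from a model to the models that depend on it. All node names
# here are illustrative placeholders, not anyone's actual worldview.
depends_on = {
    "physics": ["chemistry", "engineering"],
    "chemistry": ["biology"],
    "engineering": ["gears mesh predictably"],
    "biology": ["whales sink when they die"],
    "gears mesh predictably": [],
    "whales sink when they die": [],  # a leaf: nothing rests on it
}

def repercussions(model):
    """Count how many other models a change here could force you to revisit."""
    seen, queue = set(), deque([model])
    while queue:
        for child in depends_on[queue.popleft()]:
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return len(seen)

print(repercussions("physics"))                    # 5: near the root, wide fallout
print(repercussions("whales sink when they die"))  # 0: a leaf, nothing else moves
```

On this toy picture, the second and third tests both ask how many such edges a given model has, i.e. how enmeshed it is in the surrounding graph.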