To explain, i.e. to describe “why” something happened, is to talk about causes and effects.
I would still say that cause and effect is a subset of the kinds of models used in statistics. A case in point is Bayesian networks, which can accommodate both probabilistic and causal relations.

I’m aware that Judea Pearl and probably others reverse the picture, and hold that cause and effect are the real relations, which are only approximated in our minds as probabilistic relations. On that, I would say that quantum mechanics seems to suggest there is something fundamentally undetermined about our relation with cause and effect. Also, causal relations are very useful in physics, but one may want to use other models where physics is not especially relevant.

From what one may call an “instrumentalist” point of view, time is a dimension so universal that almost any model can compress information by incorporating it, but this is not inevitable, as relativity shows: general relativity lets you compress a lot of information by not explicitly talking about time, and thus by sidestepping clean causal relations (what counts as cause in one reference frame may be effect in another).
Prediction and explanation are very very different.
I’m not aware of a theory or a model that uses vastly different entities to explain and to predict. The typical case of a physical law posits an ontology governed by a stable relation, thus using the very same pieces to explain the past and predict the future.
Besides, such a model would be very difficult to tune: any data set can be partitioned however you like between training and test, and it seems odd for a model to depend so much on the experimenter’s intent.
I would still say that cause and effect is a subset of the kind of models that are used in statistics.
You would be wrong, then. The subset relation is the other way around. Bayesian networks are not causal models; they are statistical independence models.
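To make this concrete, here is a minimal Python sketch (with illustrative, made-up numbers) showing the classic point about Markov equivalence: two Bayesian networks over binary X and Y with opposite arrow directions represent exactly the same observational distribution, even though they make opposite causal claims.

```python
# Two Markov-equivalent DAGs over binary X, Y: "X -> Y" and "Y -> X".
# The numbers below are arbitrary, chosen only for illustration.

# Factorization for X -> Y: P(X) * P(Y|X)
p_x = {0: 0.6, 1: 0.4}
p_y_given_x = {(0, 0): 0.7, (1, 0): 0.3,   # keys are (y, x)
               (0, 1): 0.2, (1, 1): 0.8}

joint1 = {(x, y): p_x[x] * p_y_given_x[(y, x)]
          for x in (0, 1) for y in (0, 1)}

# Factorization for Y -> X: P(Y) * P(X|Y), derived from the same joint
p_y = {y: sum(joint1[(x, y)] for x in (0, 1)) for y in (0, 1)}
p_x_given_y = {(x, y): joint1[(x, y)] / p_y[y]
               for x in (0, 1) for y in (0, 1)}

joint2 = {(x, y): p_y[y] * p_x_given_y[(x, y)]
          for x in (0, 1) for y in (0, 1)}

# Both DAGs reproduce the observational distribution exactly...
assert all(abs(joint1[k] - joint2[k]) < 1e-12 for k in joint1)

# ...but they disagree about interventions: under X -> Y, setting X by
# fiat (do(X=1)) changes Y's distribution; under Y -> X it does not.
# Nothing in the observed joint distinguishes the two.
```

This is exactly the sense in which a Bayesian network, by itself, encodes independence structure rather than causal structure: the data alone cannot choose between the two factorizations.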
Compressing information has nothing to do with causality. No experimental scientist talks about causality like that, in any field. There is a big literature on something called “compressed sensing,” for example, but that literature (correctly) does not generally make claims about causality.
I’m not aware of a theory or a model that uses vastly different entities to explain and to predict.
I am.
You can’t tune causal models (e.g. trade off bias/variance properly) in any straightforward way, because the parameter of interest is never directly observed, unlike in standard regression models. Causal inference is a type of unsupervised problem, unless you have experimental data.
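A toy simulation (with hypothetical numbers) illustrates why. With an unobserved confounder, the naive difference in means lands far from the true causal effect, yet nothing in the observed data, and hence no train/test split of it, can reveal the error, because each unit’s counterfactual outcome is never observed.

```python
import random

random.seed(0)

# Hypothetical setup: a confounder U drives both treatment T and outcome Y.
# The true causal effect of T on Y is ZERO by construction, but T and Y
# are strongly correlated through U.
n = 100_000
data = []
for _ in range(n):
    u = random.random() < 0.5                  # confounder (never recorded)
    t = random.random() < (0.8 if u else 0.2)  # treatment depends on U
    y0 = 1.0 if u else 0.0                     # potential outcome without T
    y1 = y0                                    # zero effect: Y(1) == Y(0)
    y = y1 if t else y0                        # only ONE potential outcome is seen
    data.append((t, y))

treated = [y for t, y in data if t]
control = [y for t, y in data if not t]
naive = sum(treated) / len(treated) - sum(control) / len(control)

print(f"naive difference in means: {naive:.3f}; true causal effect: 0.0")
# The naive estimate is large and would generalize perfectly to any held-out
# split of (t, y) pairs, because it is a correct estimate of the OBSERVED
# association. The causal error is invisible to cross-validation: the
# counterfactual outcomes that define the estimand are not in the data.
```

This is the sense in which the problem is “unsupervised”: the ground truth you would need as a supervision signal is, by definition, missing from every observational data set.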
Rather than arguing with me about this, I suggest a more productive use of your time would be to just read some stuff on causal inference. You are implicitly smuggling in some definition you like that nobody uses.