Agreed. Assigning causality requires having made a choice about how to carve up the world into categories, so that one part of the world can affect another. Without having made this choice, we lose our normal notion of causality, because there are no things to cause other things; hence causality as normally formulated only makes sense within an ontology.
And yet, there is some underlying physical process which drives our ability to model the world with the idea that things cause other things, and we might reasonably point to it and say it is the real causality, i.e. the aspect of existence that we perceive as change.
Hmm. Imagine the world as fully deterministic. Then there is no “real causality” to speak of, everything is set in stone, and there is no difference between cause and effect. The “underlying physical process which drives our ability to model the world with the idea that things cause other things” is essential to being an embedded agent, since agency amounts to perceived optimization of the world, which requires, in turn, predictability (from inside the world), but I don’t think anyone has a good handle on what “predictability from inside the world” may look like. Offhand, it means that there is a subset of the world that runs a coarse-grained simulation of the world, but how do you recognize such a simulation without already knowing what you are looking for? Anyway, this is a bit of a tangent.
If causation is understood in terms of counterfactuals — X would have happened if Y had happened — then there is still a difference between cause and effect. A model of a world implies models of hypothetical, counterfactual worlds.
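A side note to make the claimed asymmetry concrete: even in a fully deterministic toy world, a Pearl-style intervention on the cause behaves differently from an intervention on the effect. The sketch below is purely illustrative; the variable names and the single structural equation are invented for this example, not taken from the discussion.

```python
# Minimal deterministic toy model: rain causes the grass to be wet.
# An "intervention" forcibly sets a variable, overriding its structural
# equation; this is how counterfactuals distinguish cause from effect.

def run_world(rain, intervene_on_wet=None):
    """Deterministic world. If intervene_on_wet is given, the effect
    variable is forced to that value instead of following its equation."""
    wet = rain if intervene_on_wet is None else intervene_on_wet
    return {"rain": rain, "wet": wet}

actual = run_world(rain=True)                       # it rained, grass is wet

# Counterfactual on the cause: had it not rained, the grass would be dry.
no_rain = run_world(rain=False)
assert no_rain["wet"] is False

# Counterfactual on the effect: forcing the grass dry does not undo the rain.
forced_dry = run_world(rain=True, intervene_on_wet=False)
assert forced_dry["rain"] is True

print(actual, no_rain, forced_dry)
```

Determinism fixes what actually happens, but the two interventions still come apart, which is the sense in which the counterfactual reading preserves a cause/effect distinction.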
Yes, indeed, in terms of counterfactuals there is. But counterfactuals are in the map (well, to be fair, a map is itself a tiny part of the territory, located in the agent’s brain). Which was my original point: causality is in the map.
The map and the territory are not separate magisteria. A good map, or model, fits the territory: it allows one to make accurate and reliable predictions. That is what it is, for a map to be a good one. The things in the map have their counterparts in the world. The goodness of fit of a map to the world is a fact about the world. Causation is there also, just as much as pianos, and gravitation, and quarks.
This claim:-
“A good map, or model, fits the territory: it allows one to make accurate and reliable predictions. That is what it is, for a map to be a good one.”
is not obviously equivalent to this claim:-
“The things in the map have their counterparts in the world. Causation is there also, just as much as pianos, and gravitation, and quarks.”
If you accept usefulness in the map as the sole criterion for existence in the territory, then causation is there, along with much else, including much that you do not believe in, and much that is mutually contradictory.
Hmm. Imagine the world as fully deterministic. Then there is no “real causality” to speak of, everything is set in stone, and there is no difference between cause and effect.
There’s a difference between strict causal determinism and block universe theory. Under causal determinism, future events have not happened yet, and need to be caused, even though there is only one way they can turn out. Whereas under the block universe theory, the future is already “there”—ontologically fixed as well as epistemologically fixed.
Which is the correct theory—the first one or the second?
There is plenty of evidence that human notions of causality are influenced by human concerns, but it doesn’t add up to the conclusion that there is no causality in the territory. The comparison with ontology is apt: just because tables and chairs are human-level ontology doesn’t mean that there’s no quark-level ontology to the universe.
What would it even mean to say a theory of causality is “correct” here? We’re talking about what it makes sense to apply the term “causality” to, and there’s no matter of correctness at that level, only of usefulness to some purpose. It’s only after we have some systematized way of framing a question that we can ask whether something is correct within that system.
Correctness as opposed to usefulness would be correspondence to reality.
There’s a general problem of how to establish correspondence, a problem which applies to many things other than causality. You can’t infer that something corresponds just because it is useful, but you also can’t infer that something does not correspond just because it is useful—“in the map” does not imply “not in the territory”.