For the question about human attributions, I would expect an evolutionary story: the world has causal structure, and organisms that correctly represent that structure are fitter than those that do not. We were lucky in that, somewhere in our evolutionary history, we acquired capacities to observe and/or infer causal relations, just as we are lucky to be able to see colors, smell baking bread, and so on.
This is not an explanation: it is simply saying “evolution did it”. An explanation should exhibit the mechanism whereby the concept is acquired.
It’s more like Hume’s story: imagine Adam, fully formed with excellent intellectual faculties but with neither experience nor a concept of causation. How could such a person come to have a correct concept of causation?
That is one way of presenting the thought experiment, though it requires imagining a creature that has different faculties than an ordinary human.
Another way of presenting the thought experiment is to ask how a baby arrives at the concept. Then we are not imagining a creature that has different faculties than an ordinary human.
Another way is to imagine a robot that we are building. How can the robot make causal inferences? Again, “we design it that way” is no more of an answer than “God made us that way” or “evolution made us that way”. Consider the question in the spirit of Jaynes’ use of a robot in presenting probability theory. His robot is concerned with making probabilistic inferences but knows nothing of causes; this robot is concerned with inferring causes. How would we design it that way? Pearl’s works presuppose an existing knowledge of causation, but do not tell us how to first acquire it.
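To make the contrast concrete, here is a minimal sketch (a toy confounded system of my own devising, not anything from Jaynes or Pearl) of the gap between the two robots: a purely probabilistic robot that only conditions on passive observations reports a strong association between X and Y, while a robot that can intervene discovers that X has no effect on Y at all.

```python
import random

# Toy world: a hidden confounder Z drives both X and Y.
# X has no causal effect on Y, yet X and Y are strongly correlated.
def observe():
    z = random.random() < 0.5
    x = z if random.random() < 0.9 else (not z)  # X tracks Z
    y = z if random.random() < 0.9 else (not z)  # Y tracks Z, not X
    return x, y

def intervene(x_forced):
    z = random.random() < 0.5
    y = z if random.random() < 0.9 else (not z)  # Y still tracks only Z
    return x_forced, y                           # X is set from outside

n = 100_000
obs = [observe() for _ in range(n)]

# Jaynes-style robot: conditions on passive observations of X.
p_obs = sum(y for x, y in obs if x) / sum(1 for x, _ in obs if x)

# Intervening robot: forces X and watches what Y does.
p_do = sum(y for _, y in (intervene(True) for _ in range(n))) / n

print(f"P(Y | X=True)     ~ {p_obs:.2f}")  # ~0.82: association via Z
print(f"P(Y | do(X=True)) ~ {p_do:.2f}")   # ~0.50: no causal effect
```

Note the sleight of hand, though: the intervening robot is simply handed a primitive "set X from outside" operation, which already smuggles in a causal notion, so the sketch illustrates the gap without closing it.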
I want to know what resources we are giving this imaginary Adam. Adam has no concept of causation and no ability to perceive causal relations directly. Can he perceive spatial relations directly? Temporal relations? Does he represent his own goals? The goals of others? …
That is part of the question. What resources does it need to proceed from ignorance of causation to knowledge of causation?
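For the robot version, one deliberately minimal inventory (my own toy framing, not a claim about what Adam has) is: the ability to set a variable's value from outside the system, and the ability to observe outcomes. The sketch below suggests that those two resources already suffice to distinguish "X causes Y" from "Y causes X":

```python
import random

# Hypothetical ground truth, hidden from the robot: X causes Y.
def world(set_x=None, set_y=None):
    x = (random.random() < 0.5) if set_x is None else set_x
    y = (x if random.random() < 0.9 else not x) if set_y is None else set_y
    return x, y

def shift_from_forcing(var, trials=50_000):
    """How much does forcing `var` shift the mean of the other variable?"""
    means = {}
    for forced in (False, True):
        if var == "x":
            other = [world(set_x=forced)[1] for _ in range(trials)]
        else:
            other = [world(set_y=forced)[0] for _ in range(trials)]
        means[forced] = sum(other) / trials
    return abs(means[True] - means[False])

print("do(X) shifts Y by ~", round(shift_from_forcing("x"), 2))  # ~0.8: X causes Y
print("do(Y) shifts X by ~", round(shift_from_forcing("y"), 2))  # ~0.0: Y does not cause X
```

Whether "setting a variable from outside" is itself a causal resource, rather than a pre-causal one, is of course the crux.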
I definitely agree that evolutionary stories can become non-explanatory just-so stories. The point of my remark was not to give the mechanism in detail, though, but just to distinguish the following two ways of acquiring causal concepts:
(1) Blind luck plus selection based on fitness of some sort.
(2) Reasoning from other concepts, goals, and experience.
I do not think that humans or proto-humans ever reasoned their way to causal cognition. Rather, we have causal concepts as part of our evolutionary heritage. Some reasons to think this is right: causal perception (pdf) and causal agency attributions emerge very early in children; other mammal species, like rats (pdf), have simple causal concepts related to interventions; and some forms of causal cognition emerge very, very early even among more distant species, like chickens.
Since causal concepts arise so early in humans and are present in other species, there is current controversy (right in line with the thesis in your OP) over whether they are innate. That is one reason why I prefer the Adam thought experiment to the baby version: it is unclear whether babies already have causal concepts or have to learn them.
Yes, it’s (2) that I’m interested in. Is there some small set of axioms on the basis of which causal reasoning can be set up, as has been done for probability theory, and which can then serve as a gold standard against which to measure the untutored fumblings that result from (1)?
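For what it's worth, one existing candidate for such an axiom set is Pearl's do-calculus: three rules that are sound and (it turns out) complete for deriving interventional predictions from a known causal graph G. Sketched below in the usual notation:

```latex
% Pearl's do-calculus. Notation:
%   G_{\overline{X}}  = G with arrows INTO X removed;
%   G_{\underline{Z}} = G with arrows OUT OF Z removed;
%   Z(W) = the Z-nodes that are not ancestors of any W-node in G_{\overline{X}}.
\begin{align*}
\text{(R1)}\;& P(y \mid \mathrm{do}(x), z, w) = P(y \mid \mathrm{do}(x), w)
  && \text{if } (Y \perp\!\!\!\perp Z \mid X, W) \text{ in } G_{\overline{X}} \\
\text{(R2)}\;& P(y \mid \mathrm{do}(x), \mathrm{do}(z), w) = P(y \mid \mathrm{do}(x), z, w)
  && \text{if } (Y \perp\!\!\!\perp Z \mid X, W) \text{ in } G_{\overline{X}\,\underline{Z}} \\
\text{(R3)}\;& P(y \mid \mathrm{do}(x), \mathrm{do}(z), w) = P(y \mid \mathrm{do}(x), w)
  && \text{if } (Y \perp\!\!\!\perp Z \mid X, W) \text{ in } G_{\overline{X}\,\overline{Z(W)}}
\end{align*}
```

The catch, in line with the complaint above about Pearl: the rules presuppose the graph G, so they axiomatize reasoning with causal knowledge already in hand rather than its acquisition from scratch.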