There was no controversy about Wiles’ proof of FLT
It’s the Appel-Haken 4CT proof I was actually thinking about, my bad. There was controversy about that not being a “proper” proof, as I recall, and it’s been (unfavorably) compared to Wiles’ proof in that respect (which helped me mix up the two—I’m no mathematician!).
My underlying question is “what counts as a controversy”, and more directly “how would I go about checking the facts of your claim about the correlations between a field’s distance to objective truth and proneness to controversy”.
My underlying question is “what counts as a controversy”
“A state of prolonged public dispute or debate.” How prolonged? How much disputed? Look at the various disciplines I listed and see how they compare. Agreed, for mathematics, Appel-Haken was a controversy. Compared with politics, it was animated conversation over afternoon tea at the vicarage. Also, judging from the Wikipedia account, the controversy progressed steadily to a resolution.
how would I go about checking the facts of your claim
If you want numbers and experiments, obviously I haven’t done any of that; I have just recounted what I seem to have seen. You, or someone, would have to work out an objective measure of the existence and intensity of a controversy, and survey publications in various disciplines. I don’t know if you could devise a method of detecting controversies just from citation patterns, but the more you could automate this the easier it would be to collect data.
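To make the idea concrete, here is a minimal sketch of what such an automated measure might look like. Everything in it is invented for illustration: the stance labels ("support"/"dispute") would have to come from some citation-context mining step that does not exist here, and the index itself is just one arbitrary way to score disagreement.

```python
from collections import Counter

def controversy_index(citation_stances):
    """A crude, hypothetical controversy-intensity measure: score 1.0 for
    an even support/dispute split among citing papers, 0.0 for unanimity.
    The stance labels are assumed inputs, not something computed here."""
    counts = Counter(citation_stances)
    support, dispute = counts["support"], counts["dispute"]
    total = support + dispute
    if total == 0:
        return 0.0
    return 2 * min(support, dispute) / total

# An Appel-Haken-style case: mostly accepted, a minority disputing
print(controversy_index(["support"] * 9 + ["dispute"] * 1))  # 0.2
# A live political dispute: evenly split
print(controversy_index(["support"] * 5 + ["dispute"] * 5))  # 1.0
```

Tracking such an index over time per topic would also let you test the "progressed steadily to a resolution" pattern: a resolving controversy should show the index decaying toward zero.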
Richard, do you think Pearlian causality is mathematics or something else? Because I think Pearlian causality is extremely controversial by your definition (to be fair not as a piece of math, but how applicable it is to practical scientific problems).
Richard, do you think Pearlian causality is mathematics or something else?
It’s applied mathematics.
That is, you can erect the entire thing as pure mathematics, as you can with, say, probability and statistics, or rational mechanics. The motivation is to apply it to the real world, and the language may sound like it’s talking about the real world, but that’s just a way of thinking about the pure mathematics. Then to apply it to the real world, you need to step beyond mathematics and say what real-world phenomena you are going to map the mathematical concepts to.
Pearl is insistent that the concept of causality is primitive and not reducible to statistics, but I haven’t ever read him philosophising about “what causes really are”. He just takes them as primitive and understood: do(X=x) means “set the value of X to x”, although that is clearly an unsafe instruction to give an AGI. You would have to at least amplify it with something like “without having any influence on any other variable except via the causal arrows you are willing to allow might exist”.
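The standard reading of do(X=x) is "graph surgery": delete the arrows into X, set X to x, and leave every other mechanism alone. A small simulation shows why this differs from mere conditioning; the particular structural model (a confounder Z driving both X and Y) and all its coefficients are made up for illustration.

```python
import random

def sample(n, do_x=None):
    """Draw n samples from a toy linear SCM: Z -> X, Z -> Y, X -> Y.
    Passing do_x implements do(X=x): X's own mechanism is replaced by
    the constant x, while the mechanisms for Z and Y are untouched."""
    data = []
    for _ in range(n):
        z = random.gauss(0, 1)                # confounder
        x = z if do_x is None else do_x       # intervention severs Z -> X
        y = 2 * x + 3 * z + random.gauss(0, 0.1)
        data.append((x, y))
    return data

random.seed(0)

# Interventional mean E[Y | do(X=1)] = 2*1 + 3*E[Z] = 2
interventional = sum(y for _, y in sample(100_000, do_x=1.0)) / 100_000

# Observational mean of Y among samples with X near 1 is biased upward:
# X near 1 implies Z near 1, and Z independently raises Y.
obs = [y for x, y in sample(100_000) if abs(x - 1.0) < 0.1]
observational = sum(obs) / len(obs)
```

Here `interventional` comes out near 2 while `observational` comes out near 5, which is exactly the seeing/doing distinction Pearl insists cannot be recovered from statistics alone.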
There appears to be some dispute on this issue. I’d be interested to know his answer to the conundrum posed by Scott Aaronson, or for that matter the similar one I posed here. (I am not satisfied by any of the answers in either place.)
When you ask (in your koan) how the process of attributing causation gets started, what exactly are you asking about? Are you asking how humans actually came by their tendency to attribute causation? Are you asking how an AI might do so? Are you asking about how causal attributions are ultimately justified? Or what?
I think these are all aspects of the same thing: how might an intelligent entity arrive at correct knowledge about causes, starting from a lack of even the concept of a cause?
That seems like a very different question than, say, how humans actually came by their tendency to attribute causation. For the question about human attributions, I would expect an evolutionary story: the world has causal structure, and organisms that correctly represent that structure are fitter than those that do not; we were lucky in that somewhere in our evolutionary history, we acquired capacities to observe and/or infer causal relations, just as we are lucky to be able to see colors, smell baking bread, and so on.
What you seem to be after is very different. It’s more like Hume’s story: imagine Adam, fully formed with excellent intellectual faculties but with neither experience nor a concept of causation. How could such a person come to have a correct concept of causation?
Since we are now imagining a creature that has different faculties than an ordinary human (or at least, that seems likely, given how automatic causal perception in launching cases is and how humans seem to think about their own agency), I want to know what resources we are giving this imaginary Adam. Adam has no concept of causation and no ability to perceive causal relations directly. Can he perceive spatial relations directly? Temporal relations? Does he represent his own goals? The goals of others? …
For the question about human attributions, I would expect an evolutionary story: the world has causal structure, and organisms that correctly represent that structure are fitter than those that do not; we were lucky in that somewhere in our evolutionary history, we acquired capacities to observe and/or infer causal relations, just as we are lucky to be able to see colors, smell baking bread, and so on.
This is not an explanation: it is simply saying “evolution did it”. An explanation should exhibit the mechanism whereby the concept is acquired.
It’s more like Hume’s story: imagine Adam, fully formed with excellent intellectual faculties but with neither experience nor a concept of causation. How could such a person come to have a correct concept of causation?
That is one way of presenting the thought experiment.
Since we are now imagining a creature that has different faculties than an ordinary human
Another way of presenting the thought experiment is to ask how a baby arrives at the concept. Then we are not imagining a creature that has different faculties than an ordinary human.
Another way is to imagine a robot that we are building. How can the robot make causal inferences? Again, “we design it that way” is no more of an answer than “God made us that way” or “evolution made us that way”. Consider the question in the spirit of Jaynes’ use of a robot in presenting probability theory. His robot is concerned with making probabilistic inferences but knows nothing of causes; this robot is concerned with inferring causes. How would we design it that way? Pearl’s works presuppose an existing knowledge of causation, but do not tell us how to first acquire it.
I want to know what resources we are giving this imaginary Adam. Adam has no concept of causation and no ability to perceive causal relations directly. Can he perceive spatial relations directly? Temporal relations? Does he represent his own goals? The goals of others? …
That is part of the question: what resources does such an entity need in order to proceed from ignorance of causation to knowledge of causation?
I definitely agree that evolutionary stories can become non-explanatory just-so stories. The point of my remark was not to give the mechanism in detail, though, but just to distinguish the following two ways of acquiring causal concepts:
(1) Blind luck plus selection based on fitness of some sort.
(2) Reasoning from other concepts, goals, and experience.
I do not think that humans or proto-humans ever reasoned their way to causal cognition. Rather, we have causal concepts as part of our evolutionary heritage. Some reasons to think this is right include: the fact that causal perception and causal agency attributions emerge very early in children; the fact that other mammal species, like rats, have simple causal concepts related to interventions; and the fact that some forms of causal cognition emerge very, very early even among more distant species, like chickens.
Since causal concepts arise so early in humans and are present in other species, there is current controversy (right in line with the thesis in your OP) as to whether causal concepts are innate. That is one reason why I prefer the Adam thought experiment to babies: it is unclear whether babies already have the causal concepts or have to learn them.
EDIT: Oops, left out a paper and screwed up some formatting. Some day, I really will master markdown language.
The point of my remark was not to give the mechanism in detail, though, but just to distinguish the following two ways of acquiring causal concepts:
(1) Blind luck plus selection based on fitness of some sort. (2) Reasoning from other concepts, goals, and experience.
Yes, it’s (2) that I’m interested in. Is there some small set of axioms, on the basis of which you can set up causal reasoning, as has been done for probability theory? And which can then be used as a gold standard against which to measure our untutored fumblings that result from (1)?
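For comparison, the axiomatization that does this job for probability theory (Kolmogorov's) fits on three lines:

```latex
P(A) \ge 0 \quad \text{for every event } A, \qquad
P(\Omega) = 1, \qquad
P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i)
\quad \text{for pairwise disjoint } A_1, A_2, \dots
```

Whether an equally compact set of axioms could found causal reasoning, rather than presupposing the concept of a cause as Pearl does, is exactly the open question here.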
You might be interested in Bruno Latour’s work on mapping controversies. The idea is to look at instances of “controversy” with eyes fully open rather than half-closed. There’s one on string theory, for instance.
Ah; not a true controversy then?
Just not much of a controversy.
So it’s being done already! Any results yet? mappingcontroversies.net appears to all be research in progress.