It seems to me that pop philosophy is being compared to rigorous academic science. Philosophers make great effort to understand each other’s frameworks. Controversy and disagreement abound, but exercising the mind in predicting consequences using mental models is fundamental to both scientific progress AND everyday life. You and I may disagree on our metaphysical views, but that doesn’t prevent us from exploring the consequences each viewpoint predicts. Eventually, we may be able to test these beliefs. Predicting these consequences in advance helps us use resources effectively (as opposed to testing EVERY possibility scientifically). (Human) philosophy is an important precursor to science.
I’m also glad to see in other comments that the AI case has greater uncertainty than the sleeper cell case.
Having made one counterpoint and mentioned another, let me add that this was a good read and a nice post.
You and I may disagree on our metaphysical views, but that doesn’t prevent us from exploring the consequences each viewpoint predicts.
Right. Which leads to a complementary suggestion to the OP author’s:
the immediate next step should be to zoom in on your intuitions – to figure out the source and content of the intuition as much as possible.
In addition, we should zoom out, to find consequences of the intuitions (when combined with known or discoverable facts). That’s where the interesting stuff happens.
Philosophers make great effort to understand each other’s frameworks.
This amused me, because I somewhat doubt the term “philosophy” would exist without Alexander the Great, and it appears to me that philosophers do not make great effort to understand relevant work they’ve classified as ‘not philosophy’.
I recall the celebrated philosophy journal Noûs recommending an article, possibly this one, which talked a great deal about counterfactuals without once mentioning Judea Pearl, much less recognizing that he seemed to have solved the problems under direct discussion. (Logical uncertainty of course may still be open. I will be shocked if the solution involves talking about logically impossible “possible worlds” rather than algorithms.)
Now on second search, the situation doesn’t seem quite as bad. Someone else mentioned Pearl in the pages of Noûs—before the previous article, yet oddly uncited therein. And I found a more recent work that at least admits probabilities (and Gaifman) exist. But I can see the references, and the list still doesn’t include Pearl or even that 2005 article. Note that the abstract contains an explicit claim to address “other existing solutions to the problem.”
Not to denigrate the towering contributions of Pearl, but potential outcomes (what Pearl calls counterfactuals) were invented by Jerzy Neyman in the 1920s, and much of the stuff was worked out by Don Rubin and company in the 1970s (to be fair, they didn’t talk to philosophers very much).
Of course Pearl contributed quite a bit himself, and is probably the best popularizer slash controversy magnet in the field, and has incredible scholarship and reach.
Still, if you are going to shit on academic philosophy for failing scholarship, at least do the scholarship homework yourself.
Also: did you read that paper? Are you sure Pearl et al. solved what they are talking about, or that it is even relevant? “Counterfactuals” is a big area.
Yes, I just mentioned that. The article I remember reading, which was definitely about Sobel and reverse Sobel sequences, did not contain anything that looked remotely problematic in a Pearlian framework.
As to the rest, you’re talking about giving credit for the answer and I’m talking about getting the right answer.
did not contain anything that looked remotely problematic in a Pearlian framework.
Maybe. Truth value being dependent on orderings for Sobel sequences is (a) weird (b) true according to our intuitions, apparently. So we need to figure out what’s going on (sounds like a problem for logic, really). Maybe you can have a Pearlian account of it—I am not sure myself, and I have not read such an account.
My point is this.
(a) If your complaint is about scholarship because you feel Pearl is relevant, which I base on you saying things like:
which talked a great deal about counterfactuals without once mentioning Judea Pearl
then that’s fine, but please get your own scholarship right.
(b) If your complaint is of the form: “why are you people wasting time, Pearl’s structural equations just immediately solve this,” then can you explain why, or point to a paper? How do you know this? Structural equations are not a panacea for every problem in causation and counterfactual reasoning.
...So, I found it ludicrous to call the reverse Sobel sequence false. I still basically feel that way after reading the article again, though I should say that the author gets around to a better account of the situation on page 12 of 29 (and on close inspection, Moss doesn’t really follow this by heading back to the same zombie-infested cul-de-sac. Not quite.) If the following statements come from different speakers, the second seems socially rude:
(3a) If Sophie had gone to the parade and been stuck behind a tall person, she would not have seen Pedro.
(3b) #But if Sophie had gone to the parade, she would have seen Pedro.
Since the author sometimes seems to treat these as made by one speaker, and the article is nominally about counterfactuals, I assume that both statements come from the same world-model or causal graph. I also assume that each comes from a human being, and not a logician or Omega, and thus we should take each as asserting a high probability. (The article only briefly considers someone saying, “Suppose that Sophie might not see Pedro, but that she will see Pedro,” which I don’t think we could make sense of without reading an explicitly conditional probability into the first half of the premise.) So with two different speakers, I see two credible ways to read (3b):
A. ‘Shut up.’
B. ‘You are altering the wrong node.’ (Because, e.g., ‘Had Sophie gone to the parade, she would have looked for a good place to see it.’)
If it’s just B, then politeness would require doing more to rule out A and maybe explain why the speaker doesn’t care about the first counterfactual. Note that reversing their order (that is to say, un-reversing it) would give us what sounds like an invitation to keep talking. As Moss says, the “reply may be a non sequitur, perhaps even a little annoying.” But it doesn’t read as contempt or dismissal.
In the unlikely event that (3) has only one speaker and she isn’t disagreeing with anyone, then we have:
C. ‘The answer you get depends almost entirely on the question you ask. The whole point of counterfactuals is to screen off the causes that would ordinarily determine the (probability of the) node you change. Suppose we compute different answers when we ask, “What would the founding fathers say if they were alive today?” and “What would the founding fathers say if they had lived in our time?” Then the person who chooses the question probably knows the answer they want. Peripherally, I assert that the chance of Sophie getting stuck behind tall people would be too small for me to bother qualifying my statement.’
The main point of C seems obviously true as soon as I think about doing surgery on a causal graph. We can assume both counterfactuals are also true. (3) still seems “infelicitous” because as a rule of thumb it seems like a terrible way to express C, but that depends on the audience. If we rule out C as a reading, then the speaker’s statements could still be true, so I would probably call them “epistemically responsible”, but her goals would make no sense. She would sound like a more philosophical Gollum.
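The surgery picture behind C can be made concrete. Below is a minimal Python sketch of Pearl-style graph surgery on a toy structural model of the parade scenario; every variable name and probability here is my own invention for illustration, not anything taken from the article or from Pearl:

```python
from itertools import product

# Toy structural causal model of the parade scenario.
# Exogenous noise (invented probabilities):
#   u_go   - Sophie decides to go to the parade   P(u_go = 1)   = 0.3
#   u_tall - a tall person would block her view   P(u_tall = 1) = 0.05
P_U = {"u_go": 0.3, "u_tall": 0.05}

def model(u, do=None):
    """Evaluate the structural equations, overriding any node listed in
    `do` (graph surgery: an overridden node ignores its parents)."""
    do = do or {}
    v = {}
    v["go"]    = do.get("go",    u["u_go"])
    v["stuck"] = do.get("stuck", v["go"] and u["u_tall"])
    v["sees"]  = do.get("sees",  v["go"] and not v["stuck"])
    return v

def prob_sees(do=None):
    """P(sees Pedro) after the intervention, by summing over all
    settings of the exogenous variables."""
    total = 0.0
    for u_go, u_tall in product([0, 1], repeat=2):
        u = {"u_go": u_go, "u_tall": u_tall}
        w = (P_U["u_go"] if u_go else 1 - P_U["u_go"]) * \
            (P_U["u_tall"] if u_tall else 1 - P_U["u_tall"])
        total += w * model(u, do)["sees"]
    return total

# (3a) intervenes on both 'go' and 'stuck': she never sees Pedro.
print(round(prob_sees(do={"go": 1, "stuck": 1}), 6))   # 0.0
# (3b) intervenes on 'go' alone: she almost certainly sees him.
print(round(prob_sees(do={"go": 1}), 6))               # 0.95
```

Both counterfactuals come out true (or highly probable) from the same model; the answers differ only because the two questions cut different edges, which is the whole of point C.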
Now if you’re saying that C does not seem immediately clear to you when you think about doing counterfactual surgery, I will withdraw my original criticism. I would have many new criticisms, like the author assuming intuitions I may not share and analyzing common language while barely mentioning society or motive. There’s also what reads to me as mock-formalism, adding nothing to Pearl while distracting from the issues at hand. But the issue would at least warrant discussion, and I’d have to consider whether checking for references was a good test for me to apply to the article.
(having given this a bit of thought):
Some people feel that the truth value should reasonably be changed under reversal. I think this might be because humans expect a social convention where info is given in order of relevance. So if (a) comes first, this is info that maybe the parade is crowded by default (e.g. preemption is the norm). This makes (b) implausible. If (b) comes first, this is info that we should expect to see people by default. But this can still be preempted by an exceptional situation.
I don’t think this is even about counterfactuals, specifically, but about lack of commutativity of human utterances (most logics and mathematical languages allow you to commute propositions safely). I can think of all sorts of non-counterfactual versions of this where expectation-of-a-default/preemption create non-commutativity.
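That non-commutativity can be sketched in a few lines: treat each utterance as adding the possibilities it mentions to a shared salient set, and count a counterfactual as acceptable only if its consequent holds in every salient world satisfying its antecedent. Everything below (worlds as (goes, blocked) pairs, the particular salient-set rule) is an invented toy of mine, not Moss’s or anyone else’s actual semantics:

```python
def sees(w):
    # A world is a (goes, blocked) pair; she sees Pedro iff she goes
    # and nothing blocks her view.
    goes, blocked = w
    return bool(goes and not blocked)

def acceptable(antecedent, consequent, salient):
    # The consequent must hold in every salient antecedent-world.
    return all(consequent(w) for w in salient if antecedent(w))

def run(sequence):
    """Judge utterances in order; each one first adds the worlds it
    mentions to the salient set, then is evaluated against it."""
    salient = {(1, 0)}          # default: she goes, nothing blocks her
    verdicts = []
    for antecedent, consequent, mentions in sequence:
        salient |= mentions
        verdicts.append(acceptable(antecedent, consequent, salient))
    return verdicts

# (3a) 'If she had gone and been stuck, she would not have seen Pedro.'
a3 = (lambda w: w[0] and w[1], lambda w: not sees(w), {(1, 1)})
# (3b) 'If she had gone, she would have seen Pedro.'
b3 = (lambda w: w[0], sees, set())

print(run([b3, a3]))   # Sobel order:         [True, True]
print(run([a3, b3]))   # reverse Sobel order: [True, False]
```

The same two conditionals flip from jointly acceptable to clashing purely because (3a), uttered first, preempts the default world before (3b) is judged; nothing in the rule is specific to counterfactuals.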
There exist cases where people gave “Pearlian accounts” for these types of things (see e.g. Halpern and Pearl’s paper on actual cause). But I think it requires quite a bit more work to make it go through than just “surgeries and causal graphs.” Their paper is quite long and involved.
It feels to me as though you are cherrypicking both evidence and topic. It may very well be that philosophers have a lot of work to do in the important AI field. This does not invalidate the process. Get rid of the term, talk about the process of refining human intelligence through means other than direct observation. The PROCESS, not the results (like the article you cite).
Speaking of that article from Noûs, it was published in 2010. Pearl did lots of work on counterfactuals and uncertainty dating back to the 1980s, but I would argue that “The Algorithmization of Counterfactuals” contains the direct solution you reference. That paper was published in 2011. Unless, of course, you are referring to “Causes and Explanations: A Structural-Model Approach,” which was published in 2005 in the British Journal for the PHILOSOPHY of Science.
Mind you, the 2015 article may have at least recognized (in a way that seems only mildly unhelpful) that we could ask more than one question.
That first one I mentioned is the article Noûs told me to read first at some time or other, the best face that the journal could put forward (in someone’s judgment).
Also, did my links just not load for you? One of them is an article in Noûs in 2005 saying Pearl had the right idea—from what I can see of the article its idea seems incomplete, but anyone who wasn’t committed to modal realism should have seen it as important to discuss. Yet not only the 2010 article, but even the one I linked from Jan 2015 that explicitly claimed to discuss alternate approaches, apparently failed to mention Pearl, or the other 2005 author, or anything that looks like an attempted response to one of them. Why do you think that is?
Because while I could well have been wrong about the reason, it looks to me like the authors are in no way trying to find the best solution. And while scientists no doubt have the same incentives to publish original work, they also have incentives to accept the right answer that appear wholly lacking here—at least (and I no longer know if this is charitable or uncharitable) when the right answer comes from AI theory.
I think you have hit upon the crux of the matter in your last paragraph: the authors are in no way trying to find the best solution. I can’t speak for the authors you cite, but the questions asked by philosophers are different from “What is the best answer?” They are more along the lines of “How do we generate our answers anyway?” and “What might follow?” This may lead to an admittedly harmful lack of urgency in updating beliefs.
Because I enjoy making analogies: Science provides the map of the real world; philosophy is the cartography. An error on a map must be corrected immediately for accuracy’s sake; an error in efficient map design theory may take a generation or two to become apparent.
Finally, you use Pearl as the champion of AI theory, but he is equally a champion of philosophy. As misguided as your citations may have been (as philosophers), Pearl’s work is equally well-guided in redeeming philosophers. I don’t think you have sufficiently addressed the cherrypicking charge: if your cited articles are strong evidence that philosophers don’t consider each other’s viewpoints, then every article in which philosophers do sufficiently consider each other’s viewpoints is weak evidence of the opposite.
Note that what seems to be a solution to a less philosophically aware person might not seem that way to a more philosophically aware one.