That first one I mentioned is the article Noûs told me to read first at some time or other, the best face that the journal could put forward (in someone’s judgment).
Also, did my links just not load for you? One of them is an article in Noûs in 2005 saying Pearl had the right idea—from what I can see of the article its idea seems incomplete, but anyone who wasn’t committed to modal realism should have seen it as important to discuss. Yet not only the 2010 article, but even the one I linked from Jan 2015 that explicitly claimed to discuss alternate approaches, apparently failed to mention Pearl, or the other 2005 author, or anything that looks like an attempted response to one of them. Why do you think that is?
Because while I could well have been wrong about the reason, it looks to me like the authors are in no way trying to find the best solution. And while scientists no doubt have the same incentives to publish original work, they also have incentives to accept the right answer that appear wholly lacking here—at least (and I no longer know if this is charitable or uncharitable) when the right answer comes from AI theory.
I think you have hit upon the crux of the matter in your last paragraph: the authors are in no way trying to find the best solution. I can’t speak for the authors you cite, but the questions asked by philosophers are different from “What is the best answer?” They are more along the lines of “How do we generate our answers anyway?” and “What might follow?” This may lead to an admittedly harmful lack of urgency in updating beliefs.
Because I enjoy making analogies: Science provides the map of the real world; philosophy is the cartography. An error on a map must be corrected immediately for accuracy’s sake; an error in efficient map design theory may take a generation or two to become apparent.
Finally, you use Pearl as the champion of AI theory, but he is equally a champion of philosophy. However misguided the authors you cite may have been (as philosophers), Pearl’s work does equally well at redeeming philosophers. I also don’t think you have sufficiently addressed the cherrypicking charge: if your cited articles are strong evidence that philosophers don’t consider each other’s viewpoints, then every article in which philosophers do sufficiently consider each other’s viewpoints is weak evidence of the opposite.