Philosophy quote of the day:

I am prepared to go so far as to say that within a few years, if there remain any philosophers who are not familiar with some of the main developments in artificial intelligence, it will be fair to accuse them of professional incompetence, and that to teach courses in philosophy of mind, epistemology, aesthetics, philosophy of science, philosophy of language, ethics, metaphysics, and other main areas of philosophy, without discussing the relevant aspects of artificial intelligence will be as irresponsible as giving a degree course in physics which includes no quantum theory.
Aaron Sloman (1978)

Sloman is a philosopher and a researcher in artificial intelligence and cognitive science.
So, we have a spectacular mis-estimation of the time frame: claiming 33 years ago that AI would be seen as important “within a few years”. That is off by one order of magnitude (and still counting!). Do we blame his confusion on the fact that he is a philosopher, or was the over-optimism a symptom of his activity as an AI researcher? :)
ETA:
as irresponsible as giving a degree course in physics which includes no quantum theory.
I’m not sure I like the analogy. QM is foundational for physics, while AI merely shares some (as yet unknown) foundation with all those mind-oriented branches of philosophy. A better analogy might be “giving a degree course in biology which includes no exobiology”.
Hmmm. I’m reasonably confident that biology degree programs will not include more than a paragraph on exobiology until we have an actual example of exobiology to talk about. So what is the argument for doing otherwise with regard to AI in philosophy?
Oh, yeah. I remember. Philosophers, unlike biologists, have never shied away from investigating things that are not known to exist.
So, we have a spectacular mis-estimation of the time frame: claiming 33 years ago that AI would be seen as important “within a few years”.
He didn’t necessarily predict that AI would be seen as important in that timeframe; what he said was that if it wasn’t, philosophers would have to be incompetent and their teaching irresponsible.
I didn’t read the whole article. Where did Sloman claim that AI would be seen as important within a few years?
I inferred that he would characterize it as important in that time frame from:
… within a few years, if there remain any philosophers who are not familiar with some of the main developments in artificial intelligence, it will be fair to accuse them of professional incompetence …
together with a (perhaps unjustified) assumption that philosophers refrain from calling their colleagues “professionally incompetent” unless the stakes are important. And that they generally do what is fair.
He didn’t necessarily predict that AI would be seen as important in that timeframe; what he said was that if it wasn’t, philosophers would have to be incompetent and their teaching irresponsible.
Full marks… but let’s be honest, he doesn’t get too many difficulty points for making that prediction…