I like the idea of a “philosophical edge”, but what it brings to mind is more the Dennett quote (I don’t remember whether the idea originates with him; I’d expect that it doesn’t, but I don’t know) to the effect that philosophy (as opposed to science) is what you do when you haven’t yet figured out what the right questions to ask are. (Not a perfect match for the tiling paper, but going in the right direction.)
On the other hand, I never liked the famous “it stops being called AI as soon as people start using it” meme you’re quoting, because that always struck me as a completely reasonable position to take. Surely pattern recognition, image processing, and rule-based systems aren’t obviously huge steps towards passing the Turing test, and although I’m willing to call narrow AI “narrow artificial intelligence” because I see no reason to embark on the fool’s errand of trying to change that terminology, I can’t really blame people for measuring “AI” research against the standard of general intelligence. And yes, it’s quite possible that pattern recognition, image processing, and rule-based systems are necessary baby steps on the road to AGI, but if someone in their best judgment thinks that they’re probably not, I don’t see why they’re obviously wrong. And just because your research into alchemy led to important insights into chemistry, you don’t get to call all chemistry research “alchemy” (with the obvious caveat that the analogy is imperfect: the metal-to-gold-by-magic-symbols goal of alchemy is bunk, whereas for AGI we have an existence proof).