As I understand it, there is both a psychological argument (Mahowald et al.) and a philosophical argument (Shanahan) that machines can’t “think” (or do related things).
I don’t find Mahowald et al. entirely convincing because it suffers from straw-manning LLMs—many of the claims about the limitations of LLMs were based on older work that predates GPT-3.5/ChatGPT. Clearly the bulk of the paper was written before ChatGPT launched, and I suspect the authors didn’t want to revise it substantially, because doing so would have undermined their arguments. The OP also does a good job of taking down a range of the arguments they offer.
I find the Shanahan argument stronger, or at least my reading of the philosophical argument. It goes something like this: words like “think” belong to folk psychological theories, which are subject to philosophical analysis and reflection. On those definitions of thinking, it is a category error to describe these machines as thinking (or as having other folk psychological attributes such as beliefs, desires, etc.).
This seems correct to me, but like much of philosophy it comes down to how you define words. As Bill says in a comment here: “the concepts and terms we use for talking about human behavior are the best we have at the moment. I think we need new terms and concepts.” From the philosophical point of view, a lot depends on how you view the link between folk psychological terms and the underlying substrate, and philosophers have mapped out the terrain of possible views fairly thoroughly, e.g. eliminativist (which might call for new terms), reductionist, and non-reductionist. How you view the relationship between folk psychology and the brain will in turn influence how you view folk psychological terms as applied to AI substrates of intelligent behaviour.