Surely it is almost the definition of ‘embodied cognition’ that the qualia you mention are fundamentally dependent on feedback loops with, e.g., the muscles and the gut.
Not only are they sustained in that way, but so is nearly everything about their texture, their intricate characteristics.
To my mind the issue isn’t that ChatGPT fails to implement an analogue of the processing; it’s that it doesn’t implement an analogue of the cause. And it seems like a lot of suffering is sustained in this embodied-cognition/biofeedback manner.
Are you arguing that you couldn’t implement appropriate feedback mechanisms via the stimulation of truncated nerve endings in an amputated limb?
Not exactly. I suppose you could do so.
Do you really think it’s unreasonable to assume that LLMs don’t implement any analogues of that kind of thing, though?
Maybe the broader point is that there are many things an embodied organism is doing, and using language is only occasionally one of them. It seems safe to assume that an LLM specialized in that one thing would not spontaneously implement analogues of all the other things embodied organisms are doing.
Or do you think that is wrong? Do you, e.g., think that an LLM would have to develop simulators for things like the gut in order to do its job better? Is that what you are implying? Or am I totally misunderstanding you?