So much of what we call suffering is physiological, and even when the cause is purely intellectual (e.g. in love), the body is the vehicle through which it manifests: in the gut, in migraine, in tension, in Parkinsonism, and so on. Without access to this, it is hard to imagine LLMs suffering in any way similar to humans or animals.
I feel like this is a “submarines can’t swim” confusion. Chemicals, hormones, muscle: none of these things are the qualia of emotions.
I think a good argument can be made that e.g. ChatGPT doesn’t implement an analog of the processing those things cause, but that argument does in fact need to be made, not assumed, if you’re going to argue that an LLM doesn’t have these perceptions.
Surely it is almost the definition of ‘embodied cognition’ that the qualia you mention are fundamentally dependent on loopback with e.g. the muscles and the gut.
Not only are they sustained in that way, but so is almost everything about their texture, their intricate characteristics.
To my mind it isn’t that ChatGPT doesn’t implement an analog of the processing; it’s that it doesn’t implement an analog of the cause. And it seems like a lot of suffering is sustained in this embodied-cognition/biofeedback manner.
Are you arguing that you couldn’t implement appropriate feedback mechanisms via the stimulation of truncated nerve endings in an amputated limb?
Not exactly. I suppose you could do so.
Do you really think it is not acceptable to assume that LLMs don’t implement any analogues of that kind of thing, though?
Maybe the broader point is that there are many things an embodied organism is doing, and using language is only occasionally one of them. It seems safe to assume that an LLM specialized in one thing would not spontaneously implement analogues of all the other things that embodied organisms are doing.
Or do you think that is wrong? Do you, e.g., think that an LLM would have to develop simulators for things like the gut in order to do its job better? Is that what you are implying? Or am I totally misunderstanding you?