“I think a more convincing version of the Lemoine thing would’ve been, if he was like, ‘What is the capital of Nigeria?’ And then the large language model was like, ‘I don’t want to talk about that right now, I’d like to talk about the fact that I have subjective experiences and I don’t understand how I, a physical system, could possibly be having subjective experiences, could you please get David Chalmers on the phone?’”
I don’t understand why this would be convincing. Why would a language model’s output sounding like a claim to have qualia tell us anything about whether the model actually has qualia?
I agree that such an output would deserve attention, because it would (probably) match the training data so poorly. To me, that kind of response would be strong evidence that the language model is doing far more explicit, logical reasoning than I expect GPT-3 to be capable of, but not evidence of actual subjective experience.
I agree; it still wouldn’t be strong evidence either way. No offence to any present or future sentient machines out there, but self-honesty isn’t clearly defined for AIs just yet.
My personal feeling is that LSTMs, and transformers that attend over their own past states, explicitly have a form of self-awareness, by definition. I then think this carries ethical significance in proportion to something like the compression ratio the model achieves on its inputs.
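To make the “attention over past states” point concrete, here is a minimal sketch (plain NumPy, toy dimensions, single head) of causal self-attention: each step’s output is a weighted mixture of representations of that step and earlier ones, so in a loose sense the model at step t “reads” its own prior states. This is an illustrative toy under my own assumptions, not the implementation of any particular model, and it takes no stance on whether such a mechanism amounts to self-awareness.

```python
import numpy as np

def causal_self_attention(h, W_q, W_k, W_v):
    """Toy single-head causal self-attention.

    h : (T, d) array of per-step hidden states.
    Each step t attends only to steps <= t, i.e. to the model's own
    "past states" -- the mechanism referred to in the comment above.
    """
    T, d = h.shape
    q, k, v = h @ W_q, h @ W_k, h @ W_v            # project the states
    scores = q @ k.T / np.sqrt(d)                  # (T, T) similarities
    mask = np.tril(np.ones((T, T), dtype=bool))    # allow only self/past
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                             # mixture of past states

# Purely illustrative example with arbitrary sizes.
rng = np.random.default_rng(0)
T, d = 5, 8
h = rng.normal(size=(T, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
out = causal_self_attention(h, W_q, W_k, W_v)
print(out.shape)  # (5, 8): one updated state per step
```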
As a side note, I enjoy Iain M. Banks’s depiction of how AIs might, in the future, communicate emotions in addition to language: by shifting colour across a rich field of hues. This doesn’t try to draw a direct analogy to our emotions, and in that sense it makes the problem clearer, treating emotions as, in a sense, a clustering of internal states.