I agree, it still wouldn't be strong evidence for or against. No offence to any present or future sentient machines out there, but self-honesty isn't clearly defined for AIs just yet.
My personal feeling is that LSTMs, and transformers with attention over past states, have an explicit form of self-awareness almost by definition: the current step's computation reads the system's own earlier states. If so, I'd guess the ethical significance scales with something like the compression ratio the system achieves on its inputs.
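To make that concrete, here's a minimal NumPy sketch (toy shapes and random weights, not any particular model) of what "attention over past states" means mechanically, with a causal mask so each step can only read its own history:

```python
import numpy as np

def causal_self_attention(h, w_q, w_k, w_v):
    """Step t attends only over the model's own states h[0..t]."""
    q, k, v = h @ w_q, h @ w_k, h @ w_v            # project hidden states
    scores = q @ k.T / np.sqrt(k.shape[-1])        # pairwise relevance
    t = len(h)
    scores[np.triu(np.ones((t, t), dtype=bool), 1)] = -np.inf  # no peeking ahead
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over past states only
    return weights @ v                              # summary of the model's own history

# toy run: 5 past states, 8 dims each
rng = np.random.default_rng(0)
h = rng.normal(size=(5, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(causal_self_attention(h, w_q, w_k, w_v).shape)  # (5, 8)
```

Whether reading your own past states counts as "self-awareness" is of course the philosophical question, but at least the mechanism itself is unambiguous.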
As a side note, I enjoy Iain M. Banks's depiction of how future AIs might communicate emotions alongside language, by shifting colour across a rich field of hues. It doesn't attempt a direct analogy to human emotions, which in a sense makes the problem clearer: an emotion becomes a clustering of internal states.
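Purely as a toy illustration of that last framing (arbitrary dimensions and cluster count, nothing actually from Banks), you could cluster a system's internal states and assign each cluster a hue:

```python
import numpy as np

def kmeans(x, n_clusters, n_iters=50, seed=0):
    """Plain k-means: group internal states by nearest centroid."""
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), n_clusters, replace=False)]
    for _ in range(n_iters):
        dists = np.linalg.norm(x[:, None] - centroids[None], axis=-1)
        labels = dists.argmin(axis=-1)
        for c in range(n_clusters):
            if (labels == c).any():
                centroids[c] = x[labels == c].mean(axis=0)
    return labels

# toy: 200 internal states in 8-d, six "emotions", one hue per cluster
rng = np.random.default_rng(1)
states = rng.normal(size=(200, 8))
labels = kmeans(states, n_clusters=6)
hues = labels / 6.0          # hue in [0, 1) on a colour wheel
print(hues[:10])
```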