Awareness can only exist in the past. By definition there can be no awareness in each moment of time, since nothing is changing.
The most advanced new chatbots have no internal records of the past sentences they have uttered or responded to, but exist only in the present moment. Therefore they can’t be aware. The more memory that future AIs will have, the more aware they will be.
Both GPT-3 chatterbots and LaMDA remember previous parts of the conversation.
What is your evidence for this idiosyncratic definition? What experiment would prove you wrong?
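As an aside on the memory point: in current chat systems this kind of “remembering” typically does not live inside the model at all; the surrounding application simply feeds the earlier turns back in as part of the prompt on every call. Below is a minimal sketch of that pattern in Python; the helper names (`build_prompt`, `chat_turn`) and the generic `model` callable are illustrative assumptions, not the actual GPT-3 or LaMDA interfaces.

```python
from typing import Callable, List, Tuple

def build_prompt(history: List[Tuple[str, str]], user_message: str) -> str:
    """Concatenate every previous (speaker, utterance) pair plus the new message."""
    lines = [f"{speaker}: {utterance}" for speaker, utterance in history]
    lines.append(f"User: {user_message}")
    lines.append("Bot:")
    return "\n".join(lines)

def chat_turn(model: Callable[[str], str],
              history: List[Tuple[str, str]],
              user_message: str) -> str:
    """One conversational turn; `model` is any stateless text-completion callable."""
    prompt = build_prompt(history, user_message)
    reply = model(prompt)                    # the model itself keeps no state
    history.append(("User", user_message))   # the "memory" lives entirely out here,
    history.append(("Bot", reply))           # in the conversation log we re-send
    return reply
```

On this reading, the “memory” is just the growing conversation log that gets re-sent each turn, which is one way of cashing out the claim that these bots remember previous parts of the conversation.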
There is, though, another point I find interesting related to past vs. current feelings/awareness and illusionism, even if I’m not sure it’s ultimately relevant (and I suspect it doesn’t go in the direction you meant): I wonder whether the differences and parallels between awareness of past feelings and awareness of concurrent feelings can, on the whole, help the illusionist defend his illusionism:
Most of us would agree that one could, at least in theory, ‘simply’ rewire/tweak your synapses to give you a false memory of your past feelings. We could tweak your brain’s memory so that you believe you had experiences with certain feelings in the past, even though those experiences never took place.
If we defend our belief that we had certain past feelings as strongly as we tend to defend that we now have the sentience/feelings we feel we have, this is interesting: we then have two categories of insight into our qualia (the past ones and the current ones) that we are equally willing to defend, all while knowing that some of them could have been purely fabricated and never existed.
Do we defend our knowledge of having had past feelings/sentience just as strongly as we defend our knowledge of concurrent feelings/sentience? I’m not sure.
Clearly, being aware of the rewiring possibility described above, we’d readily say: OK, I might be wrong. More relevant, though, is whether, say, historical humans without awareness of their brain structure, of neurons, etc. (and thus without the rewiring possibility in mind) would have insisted just as strongly that their knowledge of having felt past feelings is just as infallible as their knowledge of their current feelings. So far I see this as some sort of support for the possibility of illusionism despite our outrage against it, though I’m not yet sure it’s really watertight.
If the first paragraph of your comment were entirely true, it could in theory make this line of pro-illusionist argumentation even simpler (though I’m personally not entirely sure your first paragraph can really be stated as simply as that).
[Not entirely sure I read your comment the way you meant]
I guess we must strictly distinguish between what we might call “functional awareness” and “emotional awareness” in the sense of sentience.
In this sense, I’d say: let future chatbots have more memory of the past and so be more “aware”. The most immediate thing this gives them is more “functional awareness”, meaning they can take their own past conversations into account too. If, beyond this, their simple mathematical/statistical structure remains roughly as it is, then for many who currently deny LaMDA sentience there is no immediate reason to believe that the new, memory-enhanced bot is sentient. But yes, it might seem much more like it when we interact with it.