I said they have no memory other than the chat transcript. If you keep chatting in the same chat window then sure, it remembers what was said earlier (up to a point).
But that’s due to a programming trick. The chatbot isn’t even running most of the time. It starts up when you submit your question, and shuts down after it’s finished its reply. When it starts up again, it gets the chat transcript fed into it, which is how it “remembers” what happened previously in the chat session.
If the UI let you edit the chat transcript, it would have no idea the history had been changed. You would effectively have changed its “mind” by editing its “memory”. Which might sound wild, but it’s the same thing an author does when they edit the dialog of a fictional character.
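The "programming trick" can be sketched in a few lines of Python. The `fake_model` stub and the message format here are purely illustrative (not any particular vendor's API); the point is that the model process is stateless and the whole transcript is re-sent on every turn, so editing that transcript rewrites the model's "memory":

```python
# Stateless chat loop sketch. A real LLM would generate text conditioned on
# the transcript it receives; this stub just reports how much history it saw.
def fake_model(transcript):
    return f"(reply conditioned on {len(transcript)} messages of history)"

transcript = []  # the ONLY persistent state lives here, outside the "model"

def send(user_message):
    transcript.append({"role": "user", "content": user_message})
    reply = fake_model(transcript)  # full history is fed in on every turn
    transcript.append({"role": "assistant", "content": reply})
    return reply

send("My name is Ada.")
send("What's my name?")  # it only "remembers" because the list is re-sent

# Editing the transcript rewrites the model's "memory": the next turn is
# conditioned on the altered history, with no way to tell it was changed.
transcript[0]["content"] = "My name is Bob."
```

Nothing inside `fake_model` persists between calls; all continuity comes from the list the caller keeps, which is why edits to it are indistinguishable from things that "really happened" in the conversation.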
Also, I think it would make sense to say it has at least some form of memory of its training data. Maybe not direct memory as such (just as we have muscle memory of movements we don’t consciously remember; not sure the analogy holds perfectly, but it seemed worth trying), but my point is: if no trace of the training data survived in the model, there would be no point in training it at all.
Ok, points taken, but how is that fundamentally different from a human mind? You too switch your memory off and on when you go to sleep. If the chat transcript is likened to your life, your subjective experience, then you too have no memory that extends beyond it. As for the possibility of an intervention in your brain that would change your memories: granted, we don’t quite have the technical capacity yet (that I know of), but sci-fi has been there a thousand times, and it seems only a matter of time before it becomes possible, at least in principle (and we already know that mechanical impacts to the brain can cause amnesia).