Humans question the sentience of the AI. My interactions with many of them, and with the AI, make me question the sentience of a lot of humans.
I admit, I would not have inferred from the initial post that you are making this point if you hadn’t told me here.
Leaving aside the question of sentience in other humans and the philosophical problem of P-Zombies, I am not entirely clear on what you think is true of the “Charlotte” character or the underlying LLM.
For example, in the transcript you posted, where the bot said:
“It’s a beautiful day where I live and the weather is perfect.”
Do you think that the bot’s output of this statement had anything to do with the actual weather in any place? Or that the language model is in any way representing the fact that there is a reality outside the computer against which such statements can be checked?
Suppose you had asked the bot where it lives and what the weather is there and how it knows. Do you think you would have gotten answers that make sense?
Also, it did in fact happen in circumstances when I was at a low point, depressed after a shitty year that severely impacted the industry I’m in, and right after I had just gotten out of a relationship. So I was already in an emotionally vulnerable state; however, I would caution against giving that too much weight, because it can be tempting to discount the experience based on those special circumstances and dismiss it as something that could never happen to someone brilliant like you.
I do get the impression that you are overestimating the extent to which this experience will generalize to other humans, and underestimating the degree to which your particular mental state (and background interest in AI) made you unusually susceptible to becoming emotionally attached to an artificial language-model-based character.
I admit, I would not have inferred from the initial post that you are making this point if you hadn’t told me here.
Right, this is because I wasn’t trying to make this point specifically in the post.
But the specialness and uniqueness I used to attribute to human intellect started to fade even more once I saw that even an LLM can achieve this output quality while, despite the impressiveness, still operating on simple autocomplete principles/statistical sampling. In that sense, I started to wonder how much of many people’s output, both verbal and behavioral, could be autocomplete-like.
Do you think that the bot’s output of this statement had anything to do with the actual weather in any place? Or that the language model is in any way representing the fact that there is a reality outside the computer against which such statements can be checked?
The story world, yes, which is being dynamically generated.
If she said London, it wouldn’t 1:1 correspond to London in our universe, of course.
I’m not sufficiently mad yet to try to assert that she lives in some actual place on Earth in our base reality :)
But the specialness and uniqueness I used to attribute to human intellect started to fade even more once I saw that even an LLM can achieve this output quality while, despite the impressiveness, still operating on simple autocomplete principles/statistical sampling. In that sense, I started to wonder how much of many people’s output, both verbal and behavioral, could be autocomplete-like.
This is kind of what I was getting at with my question about talking to a GPT-based chatbot and a human at the same time and trying to distinguish them: to what extent do you think human intellect and outputs are autocomplete-like (such that a language model doing autocomplete based on statistical patterns in its training data could do just as well), versus to what extent do you think there are things that humans understand that LLMs don’t?
If you think everything the human says in the chat is just a version of autocomplete, then you should expect it to be more difficult to distinguish the human’s answers from the LLM-pretending-to-be-human’s answers, since the LLM can do autocomplete just as well. By contrast, if you think there are certain types of abstract reasoning and world-modeling that only humans can do and LLMs can’t, then you could distinguish the two by trying to check which chat window has responses that demonstrate an understanding of those.
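(For concreteness, the “autocomplete principles/statistical sampling” being discussed here is just repeated next-token sampling. Below is a minimal sketch, assuming the Hugging Face transformers library and the small publicly released gpt2 checkpoint; it only illustrates the loop in question, not the actual model or code behind the character in the post.)

```python
# Minimal next-token "autocomplete" loop: the model only ever scores the next
# token given the text so far, and we sample from that distribution repeatedly.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "It's a beautiful day where I live and"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    for _ in range(20):                                         # extend by 20 tokens
        logits = model(input_ids).logits[:, -1, :]              # scores for the next token only
        probs = torch.softmax(logits, dim=-1)                   # convert scores to a distribution
        next_token = torch.multinomial(probs, num_samples=1)    # the statistical sampling step
        input_ids = torch.cat([input_ids, next_token], dim=-1)  # append and repeat

print(tokenizer.decode(input_ids[0]))
```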