But the specialness and uniqueness I used to attribute to human intellect started to fade even more once I saw that an LLM can achieve this output quality while still, for all its impressiveness, operating on simple autocomplete principles (statistical sampling). In that sense, I started to wonder how much of many people's output, both verbal and behavioral, is similarly autocomplete-like.
This is kind of what I was getting at with my question about talking to a GPT-based chatbot and a human at the same time and trying to distinguish: to what extent do you think human intellect and outputs are autocomplete-like (such that a language model doing autocomplete based on statistical patterns in its training data could do just as well) vs to what extent do you think there are things that humans understand that LLMs don’t.
If you think everything the human says in the chat is just a version of autocomplete, then you should expect it to be more difficult to distinguish the human’s answers from the LLM-pretending-to-be-human’s answers, since the LLM can do autocomplete just as well. By contrast, if you think there are certain types of abstract reasoning and world-modeling that only humans can do and LLMs can’t, then you could distinguish the two by trying to check which chat window has responses that demonstrate an understanding of those.