...the question of whether or not the language produced by LLMs is meaningful is up to us. Do you trust it? Do WE trust it? Why or why not?
That’s the position I’m considering. If you take “WE” to mean society as a whole, then the answer is that the question is still under discussion and remains unsettled. But some individuals do seem to trust the text produced by certain LLMs, at least under certain circumstances. For the most part I trust the output of ChatGPT and GPT-4, though I have considerably less experience with GPT-4 than with ChatGPT. I know that both systems make mistakes of various kinds, including what is called “hallucination.” It’s not clear to me that this differentiates them from ordinary humans, who also make mistakes and often say things without foundation in reality.
I’ve just posted something at my home blog, New Savanna, in which I consider the idea that