To me, ChatGPT reads like people explaining their own reasoning missteps. That’s because most people don’t reason systematically all the time, nor do they have a comprehensive world model. Most people seem to go through life on rote, not recognizing when something doesn’t make sense because they don’t expect anything to make sense.
And the same applies to most text ChatGPT has seen.
ChatGPT can’t concentrate and reason systematically at all, though “let’s think step by step” prompting is maybe a step (pun intended) in that direction. Humans Who Are Not Concentrating Are Not General Intelligences, as Sarah Constantin put it, and ChatGPT is quite a lot like a human who is not concentrating. If you expect to discuss things with ChatGPT the way you would with a rationalist, you are in for disappointment. Quite an understandable disappointment. Paul Graham on Twitter today:

For me one of the biggest surprises about current generative AI research is that it yields artificial pseudo-intellectuals: programs that, given sufficient examples to copy, can do a plausible imitation of talking about something they understand.

I don’t mean this as an attack on this form of AI. The imitations continue to improve. If they get good enough, we’ll be splitting hairs arguing about whether they “actually” understand what they’re saying. I just didn’t expect this to be the way in.

This approach arguably takes the Turing Test too literally. If it peters out, that will be its epitaph. If it succeeds, Turing will seem to have been transcendently wise.
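As an aside, to make the “let’s think step by step” remark above concrete, here is a minimal sketch of what that kind of prompting looks like. It is only an illustration: the ask_model helper is a hypothetical stand-in, not any particular API.

```python
# Minimal sketch of "let's think step by step" (chain-of-thought) prompting.
# ask_model is a hypothetical stand-in for whatever LLM client you actually use.

def ask_model(prompt: str) -> str:
    # Replace with a real model call; this stub just returns a placeholder.
    return "<model output>"

question = (
    "I have 3 boxes with 4 apples each and give away 5 apples. "
    "How many apples do I have left?"
)

# Asked directly, the model tends to answer in one shot, much like a person
# who is not concentrating.
direct_answer = ask_model(question)

# Appending the step-by-step instruction nudges the model to write out
# intermediate reasoning before committing to a final answer.
cot_answer = ask_model(question + "\nLet's think step by step.")

print(direct_answer)
print(cot_answer)
```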
GPT also struggles with the Linda problem for the same reason:
https://twitter.com/dggoldst/status/1598317411698089984
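For anyone who doesn’t know the reference: the Linda problem is Tversky and Kahneman’s conjunction-fallacy experiment, in which people rate “Linda is a bank teller and is active in the feminist movement” as more probable than “Linda is a bank teller”, even though a conjunction can never be more probable than either of its parts. A tiny check with made-up numbers:

```python
# Conjunction rule: P(A and B) <= P(A) for any events A and B.
# The probabilities below are invented purely for illustration.

p_teller = 0.05                 # P(A): Linda is a bank teller
p_feminist_given_teller = 0.8   # P(B | A): feminist, given she is a bank teller

p_teller_and_feminist = p_teller * p_feminist_given_teller  # P(A and B)

print(f"P(bank teller)              = {p_teller:.3f}")
print(f"P(bank teller AND feminist) = {p_teller_and_feminist:.3f}")
assert p_teller_and_feminist <= p_teller  # holds no matter what numbers you pick
```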
Do people in that Twitter thread understand how GPT getting, e.g., the ball-and-bat question wrong is more impressive than it getting it right, or should I elaborate?
Please elaborate.
Had it gotten it right, that would probably have meant that it had memorized this specific, very common question. Memorizing things isn’t that impressive, and memorizing one specific thing says nothing about capabilities: a one-line program could “memorize” this one sentence. This way, however, we can be sure that it is thinking for itself; incorrectly in this case, sure, but still.
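For completeness, the question in dispute is the classic bat-and-ball problem: a bat and a ball cost $1.10 in total, and the bat costs $1.00 more than the ball; how much does the ball cost? The intuitive answer is 10 cents, the correct one is 5 cents, and a quick check shows why:

```python
# Bat-and-ball problem: bat + ball = 1.10 and bat = ball + 1.00.
# Substituting: ball + (ball + 1.00) = 1.10  =>  2 * ball = 0.10  =>  ball = 0.05.

ball = 0.05
bat = ball + 1.00
assert abs((bat + ball) - 1.10) < 1e-9  # total is $1.10, as required
assert abs((bat - ball) - 1.00) < 1e-9  # bat costs exactly $1.00 more

# The intuitive answer of 10 cents keeps the total right but breaks the
# "costs $1.00 more" condition: the difference is only 90 cents.
wrong_ball, wrong_bat = 0.10, 1.00
assert abs((wrong_bat + wrong_ball) - 1.10) < 1e-9
assert abs((wrong_bat - wrong_ball) - 1.00) > 1e-9
```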