It isn’t clear to me what those two demonstrations are trying to test. Demonstration 1 looks like a case of over-fitting, and 2 seems like an extension of 1, except with papers; I’m not sure how the papers case has anything to do with ChatGPT’s generalized capabilities. If you think ChatGPT is merely a complex lookup table, then I don’t really know what to say. Whether it’s a lookup table or an NLP model, I don’t see how either has much to do with general intelligence: both are models that may seem intelligent, if that’s what the discussion is focusing on. Honestly, I don’t really understand a lot of the stuff discussed on this site.