Doesn’t an LLM, at least initially, try to solve a problem very much like “what word is most likely to come next if this were written by a human”? The Internet might contain something close to the sum of all human knowledge, but that includes the parts that are wrong. Making GPT “better” might make it better at making the same kinds of mistakes that humans make. (I’ve never tried, but what happens if you ask ChatGPT or other LLMs about astrology or religion—subjects on which a lot of people believe things that are false? If it equivocates, is it willing to say that Superman doesn’t exist? Or Santa Claus?)