If a conclusion wasn’t in some sense implicit in the training data or previous prompts, where could it possibly come from? That’s not just a question for LLMs; it’s a question for humans, too. Everything anyone has ever learned was, in some sense, implicit in the data fed into their brain through their senses.
Being “more intelligent” in this sense means being able to draw more complex and subtle inferences from the same training data, or from less data, or with less computing power.