A very significant difference is that active learning (where you have a teacher correcting your mistakes, including on a meta-level, e.g. being able to ask your teacher "what's the right way to say X?") is way more sample-efficient than passive learning (simply ingesting data without any feedback). This is even more true when the passive data contains only positive examples and no counterexamples. I don't have a good reference handy, but for some simple learning tasks (e.g. learning a regular language, AKA a finite automaton) I vaguely recall there is a theoretical result that the asymptotic complexity is way lower with active learning. LLMs are taught in a much more passive way than humans are.
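To make the sample-efficiency gap concrete, here's a toy sketch of my own (not the regular-language result I'm half-remembering, which I believe involves query-based learning of automata): learning a threshold on {0..n-1}. A passive learner seeing random positive examples needs on the order of n of them to pin down the boundary, while an active learner that can ask "is x positive?" binary-searches it in about log2(n) queries.

```python
def active_learn_threshold(oracle, n):
    """Find the smallest t in [0, n) with oracle(t) == True,
    assuming oracle is monotone (False below t, True from t on).
    Each oracle call is one active 'membership query'."""
    lo, hi = 0, n - 1
    queries = 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if oracle(mid):
            hi = mid  # boundary is at mid or below
        else:
            lo = mid + 1  # boundary is above mid
    return lo, queries

if __name__ == "__main__":
    n = 1_000_000
    threshold = 123_456  # hypothetical target concept
    t, q = active_learn_threshold(lambda x: x >= threshold, n)
    # Recovers the exact boundary in ~20 queries; a passive learner
    # drawing random positives would need ~n examples for the same precision.
    print(t, q)
```

Obviously a threshold is a much simpler concept class than a regular language, but the shape of the argument is the same: letting the learner choose its queries collapses the sample complexity.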