In Steven Byrnes' recent post https://www.lesswrong.com/posts/PDx4ueLpvz5gxPEus/why-i-m-not-working-on-debate-rrm-elk-natural-abstractions he states:

"Two different perspectives are:

1. AGI is about knowing how to do lots of things.
2. AGI is about not knowing how to do something, and then being able to figure it out.

I'm strongly in the second camp."

I very much agree with this, and I think measuring perplexity confuses these two things (the ability to recite or recompose memorized knowledge vs. the ability to generate novel insights). I think you need an entirely different benchmark, based on something like giving a model only the information available to Newton before he came up with his laws of motion and seeing whether the model can come up with equivalent insights.
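To make the perplexity point concrete, here is a minimal sketch of what perplexity actually measures, using a toy unigram model invented purely for illustration (it stands in for a real language model): perplexity is the exponential of the average negative log-likelihood per token, so text that overlaps what the model has already absorbed scores well regardless of whether the model could have produced the underlying insight on its own.

```python
import math
from collections import Counter

def perplexity(token_log_probs):
    """Perplexity = exp of the average negative log-likelihood per token.

    A model that has effectively memorized the evaluation text assigns each
    token a high probability and so scores a low perplexity, whether or not
    it could have arrived at the ideas in that text from scratch.
    """
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# Toy "model": unigram frequencies estimated from a tiny training corpus.
train_tokens = "the laws of motion describe how bodies move".split()
counts = Counter(train_tokens)
total = sum(counts.values())

def unigram_log_prob(token, smoothing=1e-3):
    # Laplace-style smoothing so unseen tokens get a small nonzero probability.
    vocab_size = len(counts) + 1
    return math.log((counts[token] + smoothing) / (total + smoothing * vocab_size))

seen_text = "the laws of motion".split()                    # overlaps the training data
novel_text = "objects accelerate when forces act".split()   # mostly unseen tokens

print("perplexity on seen text: ", perplexity([unigram_log_prob(t) for t in seen_text]))
print("perplexity on novel text:", perplexity([unigram_log_prob(t) for t in novel_text]))
```

A Newton-style benchmark would instead have to restrict the model to information available before some cutoff and grade whether its outputs contain the equivalent insight, which is a very different (and much harder to automate) kind of evaluation.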