General intelligence might be an emergent property—something you can get by scaling a model. But it’s not clear what the basic model is that, if scaled, leads to it. It would be interesting to consider how to make progress on identifying what that is. How do you know whether the model you’re scaling has a peak intelligence that doesn’t fall short of ‘general intelligence’? How do you know when it’s time to stop scaling and explore a new model?
I guess there’s a hard limit on the scale of models that can be explored, though. If scaling further isn’t practical and the model still doesn’t cut it, it’s time to try something new. But it’s still interesting to ask whether there’s any way to tell that a model has some juice left in it which hasn’t been squeezed out. Identifying the scale required to achieve general intelligence, or even getting some vague sense of it, feels important.
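One crude way to probe for remaining juice, sketched below: fit a saturating scaling law to held-out loss measured at a few model sizes, and read the fitted asymptote as a rough estimate of the loss floor the model family approaches no matter how far it’s scaled. This is only an illustration under strong assumptions; the functional form L(N) = aN^(-b) + c, the data points, and the interpretation of the floor c are all hypothetical, not a validated method.

```python
# A minimal sketch, assuming held-out loss roughly follows a
# saturating power law in parameter count N: L(N) = a * N**-b + c.
# All numbers below are made up for illustration.

import numpy as np
from scipy.optimize import curve_fit

def saturating_power_law(n, a, b, c):
    """Loss as a function of scale n: decays as n**-b toward floor c."""
    return a * n ** (-b) + c

# Hypothetical (parameter count, held-out loss) pairs from small-scale runs.
params = np.array([1e6, 1e7, 1e8, 1e9, 1e10])
losses = np.array([4.10, 3.35, 2.80, 2.42, 2.15])

(a, b, c), _ = curve_fit(
    saturating_power_law, params, losses,
    p0=(10.0, 0.1, 1.0),                     # rough initial guess
    bounds=([0, 0, 0], [np.inf, 1, np.inf]),  # keep the fit physical
)

print(f"fitted exponent b = {b:.3f}")
print(f"fitted loss floor c = {c:.3f}")
# A fitted floor well above the level the task seems to demand hints
# the model family saturates short of the target; a floor near it
# suggests there's still juice left and further scaling may pay off.
```

Even if you trust the fit, it only tells you about the loss metric you chose, not about ‘general intelligence’ directly, so at best it’s a signal for when to stop squeezing and start exploring.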