So in your model, how much of the progress toward AGI can be made just by adding more compute + more data + working memory + algorithms that ‘just’ keep up with the scaling?
Specifically, do you think that self-reflective thought already emerges from adding those?
Not totally sure, but I think it’s pretty likely that scaling gets us to AGI, yeah. Or more precisely, it gets us to the point of AIs being able to act as autonomous researchers, or as high (>10x) multipliers on the productivity of human researchers, which seems like the key moment of leverage for deciding how the development of AI will go.
I don’t have a super clean idea of what self-reflective thought means. I do see that, e.g., GPT-4 can often say something, think further about it, and then revise its opinion. I would expect a little extra reasoning quality and general competence to push this ability a lot further.
The point you brought up seemed to rest a lot on Hinton’s claims, so his opinions on timelines and AI progress seem quite important.
Do you have a recent source for his claims about AI progress?
See e.g. “So I think backpropagation is probably much more efficient than what we have in the brain.” from https://www.therobotbrains.ai/geoff-hinton-transcript-part-one
More generally, I think the belief that cutting-edge AI systems have some kind of important advantage over humans comes more from human-AI performance comparisons: e.g., GPT-4 far outstrips any individual human’s factual knowledge of the world (though it is obviously deficient in other ways) with probably ~100x fewer parameters. A bio-anchors-based model of AI development would, in my opinion, predict that this is very unlikely. Whether the core of this advantage lies in the form, volume, or information density of the data, in the architecture, or in something about the underlying hardware, I am less confident.
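For a rough sense of where that “~100x” could come from, here is a minimal back-of-envelope sketch. Both numbers are assumptions I am supplying for illustration: ~1e14 synapses is a commonly cited rough estimate for the human brain, and GPT-4’s parameter count is unpublished, so ~1e12 is just an assumed round number.

```python
# Back-of-envelope sketch of the "~100x fewer parameters" comparison.
# Both quantities below are assumptions, not published figures.

brain_synapses = 1e14  # assumed: commonly cited rough estimate for a human brain
gpt4_params = 1e12     # assumed: GPT-4's true parameter count is not public

ratio = brain_synapses / gpt4_params
print(f"brain synapses / assumed GPT-4 params = {ratio:.0f}x")
# -> brain synapses / assumed GPT-4 params = 100x
```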