I don’t know what the correct definition of AGI is, but to me it seems that AGI is ASI. Imagine an AI that is at super-expert level in most (>95%) subjects, that has access to pretty much all human knowledge, and that can digest millions of tokens at a time and draw inferences and conclusions from them in seconds. “We” normally have a handful of real geniuses per generation. So now imagine a simulated person that is like Stephen Hawking in physics, Terence Tao in math, Rembrandt in painting, etc., all at the same time. Now imagine that you have “just” 40,000-100,000 of these simulated persons, able to communicate at the speed of light and to draw on all the knowledge in the world within milliseconds. I think this will be a very transformative experience for our society from the get-go.
I’m not convinced a first-generation AGI would be “super expert level in most subjects”. I think it’s more likely they’d be extremely capable in some areas but below human level in others. (This does mean the ‘drop-in worker’ comparison isn’t perfect, as presumably people would use them for the stuff they’re really good at rather than for any task.) See the section which begins “As of 2024, AI systems have demonstrated extremely uneven capabilities” for more discussion of this and some relevant links. I agree on the knowledge access and communication speed, but think they’re still likely to suffer from hallucination (if they’re LLM-like), which could prove limiting for really difficult problems with lots of steps.
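To put a rough number on that last point: here’s a minimal back-of-the-envelope sketch, assuming (purely for illustration; neither number is claimed anywhere above) a fixed per-step reliability and independent steps. Even a small hallucination rate compounds badly over long chains of reasoning:

```python
# Illustrative only: if each reasoning step succeeds independently with
# probability p, the chance of completing an n-step chain with no
# hallucinated step is p**n, which collapses quickly as n grows.
for p in (0.99, 0.999):
    for n in (10, 100, 1000):
        print(f"p={p}, n={n}: chain success = {p**n:.4%}")
```

At 99% per-step reliability, a 1,000-step chain succeeds end-to-end only about 0.004% of the time, which is the sense in which “lots of steps” is the hard case.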
It’s interesting that you mention hallucination as a bug/artefact. I think hallucination is what we humans do all day, every day, when we are trying to solve a new problem. We think up a solution we really believe is correct, then we try it, and more often than not we realize we had it all wrong, so we try again and again and again. I think AIs will never be free of this; I just think it will be part of their creative process, just as it is in ours. It took Albert Einstein a decade or so to figure out the theory of relativity, and I wonder how many times he “hallucinated” a solution that turned out to be wrong during those years. The important part is that he could self-correct, dive deeper and deeper into the problem, and finally solve it. I firmly believe that AI will very soon be very good at self-correcting, and if you then give your “remote worker” a day or ten to think through a really hard problem, not even the sky will be the limit...
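To make that propose-test-revise loop concrete, here’s a minimal sketch. Every name in it (`propose_solution`, `check_solution`, `solve`) is a hypothetical placeholder standing in for a model and a verifier, not any real API:

```python
import random

def propose_solution(problem: str, feedback: list[str]) -> str:
    """Hypothetical stand-in for a model proposing a candidate solution,
    informed by feedback on earlier failed attempts."""
    return f"candidate #{len(feedback) + 1} for {problem!r}"

def check_solution(candidate: str) -> bool:
    """Hypothetical stand-in for testing a candidate against reality
    (running code, checking a proof, comparing with experiment)."""
    return random.random() < 0.2  # most first guesses turn out wrong

def solve(problem: str, budget: int = 10) -> str | None:
    feedback: list[str] = []
    for _ in range(budget):
        candidate = propose_solution(problem, feedback)
        if check_solution(candidate):
            return candidate  # a "hallucination" that survived testing
        feedback.append(f"{candidate} failed")  # self-correct, try again
    return None  # budget exhausted without a verified solution

print(solve("a really hard problem"))
```

On this picture, hallucination is just the “propose” half of the loop; what matters is how cheap and reliable the “check” half is, and how large a retry budget the worker gets.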
Thanks for writing this post!
Our pleasure!