I’m not convinced a first generation AGI would be “super expert level in most subjects”. I think it’s more likely they’d be extremely capable in some areas but below human level in others. (This does mean the ‘drop-in worker’ comparison isn’t perfect, as presumably people would use them for the stuff they’re really good at rather than any task.) See the section which begins “As of 2024, AI systems have demonstrated extremely uneven capabilities” for more discussion of this and some relevant links. I agree on the knowledge access and communication speed, but think they’re still likely to suffer from hallucination (if they’re LLM-like) which could prove limiting for really difficult problems with lots of steps.
It’s interesting that you mention hallucination as a bug/artefact; I think hallucination is what we humans do all day, every day when we’re trying to solve a new problem. We think up a solution we really believe is correct, then we try it and, more often than not, realize we had it all wrong, so we try again and again and again. I don’t think AIs will ever be free of this; it will just be part of their creative process, as it is in ours. It took Albert Einstein a decade or so to figure out relativity theory, and I wonder how many times he “hallucinated” a solution that turned out to be wrong during those years. The important part is that he could self-correct and dive deeper and deeper into the problem until he finally solved it. I firmly believe that AI will very soon be very good at self-correcting, and if you then give your “remote worker” a day or ten to think through a really hard problem, not even the sky will be the limit...
Our pleasure!