I think the use of the term “AGI” without a specific definition is causing an issue here. IMHO, the crux of the matter is the difference between progress in average-case performance and progress in worst-case performance. We are seeing amazing progress on the former, but struggling with the latter (LLM hallucinations, etc.). And robotaxis require almost-perfect worst-case performance.
Tasks that rely on creativity to solve problems are fine with performance that isn’t perfect. Scientists make a lot of mistakes, but the things they get right produce a lot of value.
That raises the question: do AGIs not require almost-perfect performance?