The “mechanism” you describe for AGI does not at all sound like something that will produce results within any predictable time.
? Did you not read https://www.deepmind.com/publications/a-generalist-agent or https://github.com/google/BIG-bench or https://cloud.google.com/automl or any of the others?
The “mechanism” as I describe it is, succinctly, “what Google is already doing, but 1–3 orders of magnitude bigger”. Gato solves ~200 tasks to human level. How many tasks does the average human learn to do competently in their lifetime? 2000? 20k? 200k?
It simply doesn’t matter which it is; all are within the space of “could plausibly be solved within 10 years”.
Whatever it is, it’s bounded, and likely the same architecture can be extended to handle all the tasks. I mention bootstrapping (because ‘writing software to solve a prompt’ is a task, and ‘designing an AI model to do well on an AGI task’ is a task) because it’s the obvious way to get a huge boost in performance to solve this problem quickly.
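The arithmetic behind the “1–3 orders of magnitude” claim is worth making explicit: starting from Gato’s ~200 tasks, each order of magnitude of scaling lands exactly on one of the candidate sizes for the human task repertoire. A quick sketch (the task counts are the figures quoted above, nothing more):

```python
# Scaling Gato's ~200 tasks by 1-3 orders of magnitude reproduces
# the candidate sizes of a human's lifetime task repertoire.
gato_tasks = 200

for orders in (1, 2, 3):
    scaled = gato_tasks * 10 ** orders
    print(f"{orders} order(s) of magnitude: {scaled:,} tasks")
```

This prints 2,000, 20,000, and 200,000 — i.e., the “2000? 20k? 200k?” range above is just 200 tasks times 10x, 100x, or 1000x.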