capable of contributing to its training, re-design, agentization, etc., long before "genius level" is reached.
In some models of the world this is seen as unlikely ever to happen: these capabilities are expected to coincide, which collapses the two definitions of AGI. I think the disparity between the sample efficiency of in-context learning and that of pre-training is one illustration of how these capabilities might come apart, in the direction opposite to the one you point to: even genius-level in-context learning doesn't necessarily enable the staying power of agency, if this transient understanding can't be stockpiled and the achieved level of genius is insufficient to resolve the issue while remaining within its limitations (being unable to learn many novel things over the course of a project).