Are there convergently ordered developmental milestones for AI? I suspect there may be convergent orderings in which AI capabilities emerge. For example, LMs seem to develop syntax before semantics, but maybe there's an even more detailed ordering relative to a fixed dataset. Similarly, in embodied tasks involving spatial navigation and recurrent memory, there may be a fixed order in which enduring spatial awareness (i.e. "object permanence") emerges.
[A bunch of evidence...]
We might even end up in a world where AI roughly follows the crow/human/animal developmental milestone ordering, at least up until general intelligence. If so, we could better estimate timelines to AGI by tracking how far along that known developmental ordering an AI has progressed (a rough sketch of what that tracking could look like is below).
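To make this concrete, here is a minimal sketch of what checking for a convergent ordering might look like. Everything here is a hypothetical placeholder, not a real benchmark: the milestone names, the `evaluate` function, and the checkpoint objects are all assumptions for illustration.

```python
from typing import Callable, Dict, List, Optional

# Milestones listed in the hypothesized developmental order.
# These names are illustrative placeholders, not real evals.
MILESTONES: List[str] = ["syntax", "semantics", "object_permanence"]

def first_passing_step(
    checkpoints: Dict[int, object],            # training step -> model checkpoint
    evaluate: Callable[[object, str], float],  # (model, milestone) -> score in [0, 1]
    milestone: str,
    threshold: float = 0.8,
) -> Optional[int]:
    """Return the earliest training step at which the milestone is passed."""
    for step in sorted(checkpoints):
        if evaluate(checkpoints[step], milestone) >= threshold:
            return step
    return None  # milestone never emerged in this run

def emergence_order(
    checkpoints: Dict[int, object],
    evaluate: Callable[[object, str], float],
) -> List[str]:
    """Milestones sorted by when they first emerged during training."""
    steps = {m: first_passing_step(checkpoints, evaluate, m) for m in MILESTONES}
    emerged = [m for m in MILESTONES if steps[m] is not None]
    return sorted(emerged, key=lambda m: steps[m])

# If emergence_order(...) matches MILESTONES across many runs, datasets,
# and architectures, that is evidence for a convergent ordering, and a new
# model's position on the list becomes a crude progress indicator.
```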
ETA: I think this is heavily dependent on the training data modalities; GPT-4 does not have spatial awareness. The informativeness of convergently ordered developmental milestones seems seriously reduced because we appear to be in the "spam LLM progress" world, not the "train multiagent RL setups in simulated 3D environments" world.
DeepMind was very much on that latter path.
Agreed, but that path is far less successful right now.
What can I read to learn more about why that path was less successful?