I think one huge distinction to consider is performance at creating new ideas and capabilities vs performance at things that are already understood.
Human culture contains immense amounts of knowledge, and a major factor in how well you can perform is how much of that knowledge you can absorb (imagine programming without Google to look up APIs, or worse, learning programming from scratch with zero instructions). This is probably a big part of why human performance varies so much with g.
I don’t think this maps cleanly to the scaling question. At first glance you might think it means AI will inevitably face severe diminishing returns, like people working at the forefront of scientific knowledge. However:
AI can have a much broader base of human-generated knowledge than any individual human, and broad bases of knowledge usually enable fruitful cross-pollination across fields.
AI algorithms can be run massively in parallel to develop new ideas (whereas the human population is plateauing and many humans are not in a position to contribute to scientific progress).
Those new ideas can likely then be cheaply reintegrated into all the parallel systems (by integrating them into one instance and then copying that instance, whereas humans need to be individually educated, which is expensive), making it feasible to build further on them. A rough cost sketch of this asymmetry follows below.
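To make that asymmetry concrete, here is a minimal back-of-the-envelope sketch. All the numbers and names (cost_per_education, integration_cost, copy_cost) are made-up placeholders, not estimates; the point is only the shape of the scaling, not the magnitudes.

```python
# Rough comparison of how the cost of propagating a new idea scales with the
# number of "workers" that need to use it. Placeholder numbers, not estimates.

def human_propagation_cost(n_workers: int, cost_per_education: float) -> float:
    """Each human has to be taught the new idea individually: cost is linear in n."""
    return n_workers * cost_per_education

def ai_propagation_cost(n_workers: int, integration_cost: float, copy_cost: float) -> float:
    """The idea is integrated into one instance once, then copied cheaply to the rest."""
    return integration_cost + n_workers * copy_cost

if __name__ == "__main__":
    for n in (10, 1_000, 100_000):
        humans = human_propagation_cost(n, cost_per_education=1_000.0)
        ais = ai_propagation_cost(n, integration_cost=5_000.0, copy_cost=1.0)
        print(f"n={n:>7}: humans ~ {humans:>12,.0f}, AI copies ~ {ais:>12,.0f}")
```

Under these toy assumptions, the human cost grows linearly with a large per-person constant, while the AI cost is dominated by a one-time integration cost plus a negligible per-copy cost, which is the sense in which building further on new ideas stays cheap as the number of parallel instances grows.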