No, I don’t see a real distinction here. If you increase skull size, you increase the rate at which new abilities are invented and combined. If you come up with a mathematical idea, you advance a whole swath of ability-seeking searches. I listed some other things that increase meta-ability. What’s the distinction between the various things that feed back to the meta-level?
There is an enormous difference between “increase skull size” when you are already well into diminishing returns on brain size given only ~1e9 seconds of training data per lifetime, and an improvement that allows knowledge to be compressed, externalized, and shared permanently to train new minds.
After that cultural transition, each new mind can train on the compressed, summarized experiences of all previous minds of the tribe/nation/civilization. You go from having only ~1e9 seconds of training data that is thrown away when each individual dies, to having an effective training dataset that scales with the total population integrated over time. That is a radical shift to a fundamentally new scaling equation, and that is why it is a metasystems transition, whereas increasing skull size is not.
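To put the contrast in rough symbols (my notation, not anything from the discussion above; N and c are illustrative placeholders): before the transition, each mind’s lifetime data budget is roughly

$$D_{\text{mind}} \approx 10^{9}\,\text{s of experience, discarded at death,}$$

while after it, the effective dataset available to each new mind grows with the population integrated over time,

$$D_{\text{eff}}(t) \;\propto\; \int_{0}^{t} N(\tau)\, c(\tau)\, d\tau,$$

where N(τ) is the extant population and c(τ) is some compression/transmission efficiency. The first quantity is a fixed constant per individual; the second is cumulative and unbounded, which is the sense in which the scaling equation itself changes.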
Increasing skull size would also let you have a much larger working memory, run multiple trains of thought while keeping high interconnect between them, etc., which would let you work on problems too large to fit in one normal human’s working memory.
I simply don’t buy the training data limit. You have infinite free training data from internal events, aka math.
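As a toy illustration of that point (a hypothetical sketch, not anyone’s actual training setup): a generator that mints unlimited self-verified training pairs from pure internal computation, with no external data source at all.

```python
import itertools
import random

def make_training_pair():
    """One self-verified training example from 'internal events':
    both the problem and its ground-truth label are produced internally,
    so the supply is unbounded and costs no external data collection."""
    a = random.randint(0, 10**6)
    b = random.randint(0, 10**6)
    problem = f"{a} * {b} = ?"
    answer = a * b  # the verifier is exact arithmetic, not a human annotation
    return problem, answer

# An unlimited stream: every draw is fresh, labeled, and noise-free.
stream = (make_training_pair() for _ in itertools.count())
print(next(stream))  # prints one freshly generated, exactly-labeled pair
```

Real mathematical self-play would of course generate harder statements than arithmetic, but the structural point is the same: the label comes from internal verification, so the data is free and effectively infinite.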
Zooming out more, I still haven’t seen you argue why there aren’t further shifts that change the scaling equation. (I’ve listed some that I think would do so.)
The distinction is that without the initial 0-1 phase transition, none of the other things are possible. They are all instances of cumulative cultural accretion, whereas the transition itself constitutes entering the regime of cumulative cultural accretion (other biological organisms and extant AI systems are not in this regime). If I understand the author correctly, the creation of AGI will increase the pace of cumulative cultural accretion, but will not lead us (or them) to exit that regime (since, according to the point about universality, there is no further regime).
I think this answer also applies to the other comment you made, for what it’s worth. It would take me more time than I am willing to spend to make a cogent case for this here, so I will leave the discussion for now.
Ok. I think you’re confused, though: other things we’ve discussed are pretty much as 0-to-1 as cultural accumulation.