It just means specific innovations that have especially big increases in intelligence. But I think that lots of innovations, such as mathematical ideas, have big increases in intelligence.
Okay, sure. If my impression of the original post is right, the author would not disagree with you, but would rather claim that there is an important distinction to be made among these innovations. Namely, one of them is the 0-1 transition to universality, and the others are not. So, do you disagree that such a distinction may be important at all, or merely that it is not a distinction that supports the argument made in the original post?
It would be a large, broad increase in intelligence. There may be other large broad increases in intelligence. I think there are also other large narrow increases, and small broad increases. Jacob seems to be claiming that there aren’t further large increases to be had. I think the transition to universality is pretty vague. Wouldn’t increasing memory capacity also be a sort of increase in universality?
I have to say I agree that there is vagueness in the transition to universality. That is hardly surprising, seeing as it is a confusing and contentious subject that involves integrating perspectives on a number of other confusing and contentious subjects (language, biological evolution, cultural evolution, collective intelligence, etc.). However, despite the vagueness, I personally still see this transition, from being unable to accrete cultural innovations to being able to do so, as a special one, different in kind from the particular technologies that have been invented since.
Perhaps another way to put it is that the transition seems to bestow on us, as a collective, a meta-ability to obtain new abilities (or increased intelligence, as you put it), that we previously lacked. It is true that there are particular new abilities that are particularly valuable, but there may not be any further meta-abilities to obtain.
Just so we aren’t speaking past each other: do you get what I am saying here? Even if you disagree that this is relevant, which may be reasonable, does the distinction I am driving at even make sense to you, or still not?
No, I don’t see a real distinction here. If you increase skull size, you increase the rate at which new abilities are invented and combined. If you come up with a mathematical idea, you advance a whole swath of ability-seeking searches. I listed some other things that increase meta-ability. What’s the distinction between various things that hit back to the meta-level?
There is an enormous difference between “increase skull size”, when we are already well into diminishing returns on brain size given only ~1e9 seconds of lifetime training data, and an improvement that allows compressing knowledge, externalizing it, and sharing it permanently to train new minds.
After that cultural transition, each new mind can train on the compressed summary experiences of all previous minds of the tribe/nation/civilization. You go from having only ~1e9 seconds of training data that is thrown away when each individual dies, to having an effective training dataset that scales with the total population integrated over time. It is a radical shift to a fundamentally new scaling equation, and that is why it is a metasystems transition, whereas increasing skull size is not.
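To make that scaling contrast concrete, here is a rough back-of-the-envelope gloss (my own formalization, not something stated in the original post; the symbols N(t) and r are stand-ins I am introducing for illustration). Pre-transition, every new mind trains on a roughly fixed budget of lived experience that dies with it:

$$D_{\text{individual}} \approx 10^9 \text{ seconds of experience per lifetime}$$

Post-transition, a new mind can also train on whatever fraction of prior minds’ experience has been compressed into transmissible culture, so the effective dataset grows with the population integrated over time:

$$D_{\text{culture}}(T) \approx r \int_0^T N(t)\, dt$$

where N(t) is the number of minds alive at time t and r is the (small) fraction of each mind’s experience that gets compressed, externalized, and preserved. The first quantity is a constant per individual; the second keeps growing for as long as there is a population accreting and transmitting knowledge, which is the sense in which the scaling equation itself changes.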
Increasing skull size would also let you have a much larger working memory, run multiple trains of thought that still share high-bandwidth interconnect, etc., which would let you work on problems that are too hard to fit in one normal human’s working memory.
I simply don’t buy the training data limit. You have infinite free training data from internal events, aka math.
More zoomed out, I still haven’t seen you argue why there aren’t more shifts that change the scaling equation. (I’ve listed some that I think would do so.)
The distinction is that without the initial 0-1 phase transition, none of the other stuff is possible. They are all instances of cumulative cultural accretion, whereas the transition constitutes entering the regime of cumulative cultural accretion (other biological organisms and extant AI systems are not in this regime). If I understand the author correctly, the creation of AGI will increase the pace of cumulative cultural accretion, but will not lead us (or them) to exit that regime (since, according to the point about universality, there is no further regime).
I think this answer also applies to the other comment you made, for what it’s worth. It would take me more time than I am willing to spend to make a cogent case for this here, so I will leave the discussion for now.
Ok. I think you’re confused, though; other things we’ve discussed are pretty much as 0-to-1 as cultural accumulation.
Innovations that unlock a broad swath of further abilities could be called “qualitatively more intelligent”. But 1. things that seem “narrow”, such as many math ideas, are qualitative increases in intelligence in this sense; and 2. there are a lot of innovations that sure seem to obviously be qualitative increases.