Poke’s comment is interesting and I agree with his/her discussion of cultural evolution. But this point can also be turned around to indicate a possible sweet spot in the fitness landscape that we are probably approaching. At the same time, I think the character of this sweet spot indicates scant likelihood of a very rapidly self-bootstrapping AGI.
Probably the most important and distinctive aspect of humans is our ability and desire to coordinate (express ourselves to others, imitate others, work with others, etc.). That ability and desire is required to engage in the sort of cultural evolution that Poke describes. It underlies the individual acquisition of language, cultural transmission, long term research programs, etc.
But as Eric Raymond points out, we are just good enough at this to make it work at all. A bunch of apes trying to coordinate world-wide culture, economy and research is a marginal proposition.
Furthermore, we can observe that major creative works come from a very small number of people in “hot” communities—e.g. Florence during the Renaissance. As Paul Graham points out, this can’t be the result of a collection of uniquely talented individuals; it must be some function of the local cultural resources and incentives. Unfortunately I don’t know of any fine-grained research on what these situations have in common—we probably don’t even have the right concepts to express those characteristics.
A mundane version of this is the amazing productivity of a “gelled team”, in software development and other areas. There is some interesting research on the fine-grained correlates of team productivity, but not much.
So I conjecture that there is a sweet spot for optimized “thinking systems” equivalent to highly productive human teams or larger groups.
Of course we already have such systems, combining humans and digital components; the digital parts compensate for human limitations and decrease coordination costs in various ways, but they are still extremely weak—basically networked bookkeeping mechanisms of various sorts.
The natural direction of evolution here is that we improve the fit between the digital parts and the humans, tweak the environment to increase human effectiveness, and gradually increase the capabilities of the digital environment, until the humans are no longer needed.
As described, this is just incremental development. However, it is self-accelerating; these systems are good tools for improving themselves. I expect we’ll see the usual sigmoid curve, where these “thinking systems” relatively quickly establish a new level, but then development slows down as they run into intrinsic limitations—though it is hard to predict what these will be, just as Ada Lovelace couldn’t predict the difficulties of massively parallel software design.
From here, we can see a sweet spot that is inhabited by systems with the abilities of “super teams”, perhaps with humans as components. In this scenario any super team emerges incrementally in a landscape with many other similar teams in various stages of development. Quite likely different teams will have different strengths and weaknesses. However nothing in this scenario gives us any reason to believe in super teams that can bootstrap themselves to virtual omniscience or omnipotence.
This development will also give us deep insight into how humans coordinate and how to facilitate and guide that coordination. This knowledge is likely to have very large consequences outside the development of the super teams.
Unfortunately, none of this thinking gives us much of a grip on the larger implications of moving to this sweet spot, just as Ada Lovelace (or Thomas Watson) didn’t anticipate the social implications of the computer, and Einstein and Leo Szilard didn’t anticipate the social implications of control over nuclear energy.