If you’re doing steepest descent or evolution, you’re probably more likely to go over cliffs than if you were just travelling in random directions, since these methods seek out the directions of greatest change. So you’d expect to see much faster change in your loss metric when you are optimizing for it directly than when you are optimizing for it indirectly, or for something else entirely.
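To make that point concrete, here is a minimal numerical sketch (my own illustration, not part of the original discussion) comparing the per-step change in a toy loss when you step along the gradient versus along a random direction of the same size. The quadratic “landscape”, its dimension, and the step size are arbitrary assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

dim = 50
# Hypothetical anisotropic quadratic bowl standing in for a loss/fitness landscape.
curvatures = np.linspace(1.0, 100.0, dim)

def loss(x):
    return 0.5 * np.sum(curvatures * x**2)

def grad(x):
    return curvatures * x

x0 = rng.normal(size=dim)
step = 0.01

# One steepest-descent step: move against the unit-normalised gradient.
g = grad(x0)
gd_drop = loss(x0) - loss(x0 - step * g / np.linalg.norm(g))

# Many random unit-direction steps of the same size, for comparison.
drops = []
for _ in range(1000):
    d = rng.normal(size=dim)
    d /= np.linalg.norm(d)
    drops.append(loss(x0) - loss(x0 + step * d))

print(f"steepest-descent drop:      {gd_drop:.4f}")
print(f"random-direction drop mean: {np.mean(drops):.4f}")
print(f"random-direction drop best: {np.max(drops):.4f}")
```

On this toy surface the gradient step reliably and substantially decreases the loss, while random steps mostly cancel out (their average change is near zero), and even the best of a thousand random directions does worse than the single gradient step. That is the sense in which optimizing a metric directly produces much faster movement in that metric than wandering does.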
On Paul’s model, evolution was not really optimising for general intelligence at all, and then, as selection pressures shifted, it suddenly became interested in general intelligence and started (for the first time ever) optimising for it quite strongly. In other words, we shouldn’t interpret the discontinuous jump in history as evidence of any discontinuity in the landscape, because evolution was just wandering randomly around the landscape until very recently.
If you follow that picture, and write off everything before hominid evolution (or Homo sapiens evolution) as irrelevant on the grounds that evolution wasn’t yet optimising for general intelligence, then the more recent evolutionary story could just be continuous, albeit fast on an evolutionary timescale.
So the point isn’t that evolution was indirectly optimising for intelligence ‘all along’ and yet we should somehow expect things to be different when we directly optimise for intelligence in AI development. Rather, it’s that evolution only started optimising for intelligence at all very late in evolutionary history (and did so fairly indirectly), and all we can say is that once it started, it made fairly fast progress. Hence my conclusion was that evolution just doesn’t tell us much one way or the other about AI development, which, in the end, means I agree with your conclusion.
In conclusion, I think this leaves us in a position where determining what rate of progress we will see in AI development depends almost entirely on our “inside view” perspective, that is, on our knowledge about intelligence itself rather than on our observations of how progress in intelligence has occurred in other situations. The “outside view” perspective gives us only small but positive evidence in favor of discontinuous change. Which might be pretty much where we started out in this debate.
As to how to actually do the ‘inside view’ with any kind of reliability: I suggested in my post that we try to enumerate the developments we expect to be necessary to get to AGI, and consider whether any of them seem likely to produce a discontinuity if discovered.
Stuart Russell provided a list of these capacities in Human Compatible:

- human-like language comprehension
- cumulative learning
- discovering new action sets
- managing its own mental activity
For reference, I’ve included two capabilities we already have that I imagine would have been on a similar list in 1960.