evolution would love superintelligences whose utility function simply counts their instantiations! so of course evolution did not lack the motivation to keep going down the slide. it just got stuck there (for at least ten thousand human generations, possibly and counterfactually for much, much longer). moreover, non-evolutionary AIs also getting stuck on the slide (for years if not decades; median group folks would argue centuries) provides independent evidence that the slide is not too steep (though, like i said, there are many confounders in this model and little to no guarantees).
Evolution got stuck on the slide with humans because cultural evolution outcompeted biological evolution: cultural evolution could make immediate, direct impacts on small tribes in a hunter-gatherer environment within a few short generations (see the first chapter of The Secret of Our Success), and the high-order bit in biological evolution suddenly became "how efficient is cultural evolution?"
(Non-evolutionary AIs don’t seem stuck on the slide at all to me.)
Yes, that particular argument seemed rather strange to me. “Ten thousand human generations” is a mere blip on an evolutionary time-scale; if anything, the fact that we now stand where we are, after a scant ten thousand generations, seems to me quite strong evidence that evolution fell into the pit, and we are the result of its fall. And, since evolution did not manage to solve the alignment problem before falling into the pit, we do not have a utility function that “counts our instantiations”; instead the things we value are significantly stranger and more complicated.
In fact, the whole analogy to evolution seems to me a near-exact match to the situation we find ourselves in, just with the relevant time-scales shrunken by several orders of magnitude. I see Paul’s argument that these two regimes are different as essentially a slightly reskinned version of the selection versus control distinction—but as I’m not convinced the distinction being pointed at is a real one, I’m likewise not reassured by Paul’s argument.
indeed, i even gave a talk almost a decade ago about the evolution:humans :: humans:AGI symmetry (see below)!
what confuses me though is that the “is a general reasoner” and “can support cultural evolution” properties seemed to emerge pretty much simultaneously in humans—a coincidence that requires its own explanation (or dissolution). furthermore, eliezer seems to think that the former property is much more important / discontinuity-causing than the latter. and, indeed, outsized progress being made by individual human reasoners (scientists/inventors/etc.) seems to support such a view.
what confuses me though is that “is general reasoner” and “can support cultural evolution” properties seemed to emerge pretty much simultaneously in humans—a coincidence that requires its own explanation (or dissolution).
David Deutsch (in The Beginning of Infinity) argues, as I recall, that they’re basically the same faculty. In order to copy someone else / “carry on a tradition”, you need to model what they’re doing (so that you can copy it), and similarly for originators to tell whether students are correctly carrying on the tradition. The main thing that’s interesting about his explanation is how he explains the development of general reasoning capacity, which we now think of as a tradition-breaking faculty, in the midst of tradition-promoting selection.
If you buy that story, it ends up being another example of a treacherous turn from human history (where individual thinkers, operating faster than cultural evolution, started pursuing their own values).
I think that these properties encourage each other’s evolution. When you’re a more general reasoner, you have a bigger hypothesis space; specifying a hypothesis requires more information, so you also benefit more from transmitting information. Conversely, once you can transmit information, general reasoning becomes much more useful, since you effectively have access to much bigger datasets.
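The mutual-reinforcement argument above can be sketched as a toy dynamical system (my own illustration, not anything proposed in the thread): let each trait's marginal fitness return be proportional to the level of the other, and compare against the case where the traits grow independently. The parameter values and update rule are arbitrary assumptions chosen only to show the qualitative feedback effect.

```python
# Toy model: g = general-reasoning ability, c = cultural-transmission capacity.
# When coupled, each trait's growth rate is proportional to the level of the
# other, so the pair runs away multiplicatively; uncoupled, each trait only
# accumulates linearly. All constants here are illustrative assumptions.

def coevolve(steps, alpha=0.1, beta=0.1, coupled=True):
    g, c = 1.0, 1.0
    for _ in range(steps):
        # reasoning pays off more when more information is transmitted
        dg = alpha * (c if coupled else 1.0)
        # transmission pays off more when receivers can reason better
        dc = beta * (g if coupled else 1.0)
        g, c = g + dg, c + dc  # simultaneous update
    return g, c

g_coupled, _ = coevolve(100, coupled=True)
g_alone, _ = coevolve(100, coupled=False)
print(g_coupled, g_alone)  # coupled growth vastly outpaces uncoupled
```

With symmetric starting values the coupled system grows geometrically (roughly a factor of 1.1 per step here) while the uncoupled one grows by a fixed increment, which is the qualitative point: neither trait alone explains the takeoff, but the feedback loop between them does.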
If information is ‘transmitted’ by modified environments and conspecifics biasing individual search, marginal fitness returns on individual learning ability increase, while from the outside it looks just like ‘cultural evolution.’