Clones of von Neumann, or anybody else for that matter, are just time-delayed twins. That is to say, you get embryos with (at best) the same genetics, but that still have to be gestated, raised, and educated, and that will form their own experiences along the way.
Maybe the average intelligence of adults who came from von Neumann embryos would be substantially greater than that of the general population, such that the extremely intelligent ones occur a hundred times as often as usual. That’s probably the most optimistic outcome. Then again, maybe the radically different life experiences (such as being one of ten million subjects in a massive prototype project run on generation timescales, the kind of undertaking that historically does not go well) would make the upper extreme less likely.
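To get a feel for what that optimistic outcome implies, here is a back-of-the-envelope sketch. It assumes intelligence in both groups is normally distributed with a standard deviation of 15 and treats “extremely intelligent” as scoring 160 or above; the cutoff and the candidate cohort means are illustrative assumptions, not estimates of anything.

```python
from scipy.stats import norm

POP_MEAN, SD = 100.0, 15.0
THRESHOLD = 160.0  # illustrative cutoff for "extremely intelligent"

# Base rate of clearing the cutoff in the general population.
base_rate = norm.sf(THRESHOLD, loc=POP_MEAN, scale=SD)

# How far the clone cohort's mean would have to shift to make the
# upper tail a hundred times as populated, under this toy normal model.
for clone_mean in (110, 115, 120, 125):
    rate = norm.sf(THRESHOLD, loc=clone_mean, scale=SD)
    print(f"cohort mean {clone_mean}: ~{rate / base_rate:.0f}x the base rate")
```

Under those assumptions, the cohort average would need to land roughly twenty points above the general mean before the tail gets a hundred times as crowded, which is why the hundredfold figure sits at the optimistic end.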
So in the really great case, you raise ten million clones who are 100x more likely than average to be top tier in intelligence. But even doing nothing, the general population is likely to produce just as many top-tier people within a few years, since ten million clones at a hundred times the base rate are only worth about a billion ordinary births. To substantially increase the fraction of extremely smart people, you’re going to need a much bigger multiplier than a hundred for top-tier intelligence, or to raise something like a hundred million clones, more than the population of most nations.
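That comparison is just arithmetic on rough figures. The base rate, cohort size, and world birth rate below are order-of-magnitude assumptions for illustration, not sourced numbers:

```python
# Crude comparison: an enriched clone cohort vs. ordinary birth cohorts.
BASE_RATE = 1 / 30_000          # assumed prevalence of top-tier intelligence
COHORT = 10_000_000             # clones raised in the optimistic scenario
MULTIPLIER = 100                # the optimistic enrichment factor from above
BIRTHS_PER_YEAR = 130_000_000   # rough order of magnitude for world births

from_cohort = COHORT * MULTIPLIER * BASE_RATE
from_one_year = BIRTHS_PER_YEAR * BASE_RATE

print(f"expected top-tier people from the cohort: ~{from_cohort:,.0f}")
print(f"expected top-tier people per year of ordinary births: ~{from_one_year:,.0f}")
print(f"years of ordinary births to match the cohort: ~{from_cohort / from_one_year:.1f}")
```

On these numbers the cohort is worth something like a billion ordinary births, and the world supplies that many in well under a decade, which is where the “bigger multiplier or a hundred million clones” conclusion comes from.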
That’s after you get past the difficulty of creating viable clone embryos at all, finding enough women or building good enough artificial wombs to gestate them all (without doing anything that might harm their chances of being extremely smart in adulthood), and then raising them in environments conducive to becoming intelligent. Just the initial experiments to establish the viability of the idea would take on the order of a generation or two, even if there were no other practical or ethical considerations.
So all of this looks possible, but not at all likely to be effective for preventing AI catastrophe or anything else that might happen within a hundred years.
> Maybe the average intelligence of adults who came from von Neumann embryos would be substantially greater than that of the general population, such that the extremely intelligent ones occur a hundred times as often as usual. That’s probably the most optimistic outcome.
I think this is actually a quite pessimistic outcome and that IQ is much more heritable (and genetically determined) than this assumes.
I’m sure that IQ, or really some more specific capability that helps with AI alignment research, could indeed be highly heritable. In the long run, much greater than 100x prevalence multipliers should be easily achievable.
I very strongly doubt that the first-generation experiment will get anywhere near the theoretical bounds of heritability. Getting 100x the proportion of supergeniuses in the first generation is flagrantly optimistic enough for me.
Having two generations to play with is pretty optimistic in my view, as it corresponds to having them start to work productively on AGI safety around 2100. While still plausible, I expect the problem to be either solved by then or obviated by civilization failure one way or another.