I’m sure that IQ, or really some more specific capability that helps with AI alignment research, could indeed be highly heritable. In the long run, much greater than 100x prevalence multipliers should be easily achievable.
I very strongly doubt that the first-generation experiment will get anywhere near the theoretical bounds of heritability. Getting 100x the proportion of supergeniuses in the first generation already strikes me as flagrantly optimistic.
Having two generations to play with is pretty optimistic in my view, as it corresponds to having them start to work productively on AGI safety around 2100. While that is still plausible, I expect the problem to be either solved by then or obviated by civilizational failure one way or another.