Discussion of intelligence enhancement via reproductive biotechnology can occur smoothly here, e.g. in Wei Dai’s post and associated comment thread several months ago. Looking at those past comments, I am almost certain that I could rewrite your comment to convey the same core points and yet have it be upvoted.
I think your comment was relatively ill-received because:
1) It threw in a number of other questionable claims on different topics without extensive support, rather than focusing on one at a time, and suggested very high confidence in the agglomeration while not addressing important variables (e.g. how much a shift in the IQ distribution would help vs. hurt, how much this depends on social norms rather than just the steady advance of technology, how much leverage a few people have over those norms by participating in ideological arguments, and so forth).
2) The style was more stream-of-consciousness and in-your-face, rather than cautiously building up an argument for consideration.
3) There was a vibe of “grr, look at that oppressive taboo!” or “Hear me, O naive ideologically-blinkered folks!” That signals to some extent that one is in a “color war” mood, or attracted to the ideological high of striking for one’s views against ideological enemies. That positively invites a messy political fight rather than a focused discussion of the prospects of reproductive biotechnology to improve humanity’s prospects.
4) People like Nick Bostrom have written whole papers about biological enhancement, e.g. his paper on using evolutionary heuristics to look for promising enhancement possibilities. Look at its bibliography. Or consider the Less Wrong post by Wei Dai I mentioned earlier, and others like it. People focused on AI risk are not simply unaware of the behavioral genetics or psychometrics literatures, and it’s a bit annoying to have them presented as some kind of secret knock-down argument.