OK, I got two minuses already, can’t say I’m surprised because what I wrote is not politically correct, and probably some of you thought that I broke the “politics is the mind-killer” informal rule (which is not really rational if you happen to believe that the default political position—the one most likely to pass under the radar as non-mindkilling—is not static, but in fact is constantly shifting, usually in a leftwards direction).
For the sake of all rationalists, I hope I was downvoted for the latter reason. Otherwise, if even people in the rationalist community adopt thought processes closer to those of politicians (i.e., demotism) than of true scientists, all hope for rational argument is lost.
The unfortunate fact is that you cannot separate the speed of scientific progress from public policy or from the particular structure of the society engaged in science. Science is not some abstract ideal; it is the triumph of the human mind, of the still-rare people possessing both intelligence and rationality (the latter may even be restricted to their area of expertise; see Abdus Salam or Georges Lemaître). Humans are inherently political animals. The quality of science depends directly, first and foremost, on the number and quality of the minds performing it, and some political positions happen to increase that number more than others. Simply ignoring the connection is not an option if you really believe, as I do, in the promise of science to improve the lives of every human being, no matter his IQ or mental profile.
If you downvote me, I have one request: I would at least like to read why.
Discussion of intelligence enhancement via reproductive biotechnology can occur smoothly here, e.g. in Wei Dai’s post and associated comment thread several months ago. Looking at those past comments, I am almost certain that I could rewrite your comment to convey the same core points and yet have it be upvoted.
I think your comment was relatively ill-received because:
1) It threw in a number of other questionable claims on different topics without extensive support, rather than focusing on one at a time, and suggested very high confidence in the agglomeration while not addressing important variables (e.g. how much would a shift in the IQ distribution help vs. hurt, how much does this depend on social norms rather than just the steady advance of technology, how much leverage do a few people have on those norms by participating in ideological arguments, and so forth).
2) The style was more stream-of-consciousness and in-your-face, rather than cautiously building up an argument for consideration.
3) There was a vibe of “grr, look at that oppressive taboo!” or “Hear me, O naive ideologically-blinkered folks!” That signals to some extent that one is in a “color war” mood, or attracted to the ideological high of striking for one’s views against ideological enemies. That positively invites a messy political fight rather than a focused discussion of the prospects of reproductive biotechnology to improve humanity’s prospects.
4) People like Nick Bostrom have written whole papers about biological enhancement, e.g. his paper on using evolutionary heuristics to look for promising enhancement possibilities. Look at its bibliography. Or consider the Less Wrong post by Wei Dai I mentioned earlier, and others like it. People focused on AI risk are not simply unaware of the behavioral genetics or psychometrics literatures, and it’s a bit annoying to have them presented as some kind of secret knock-down argument.
I didn’t downvote you, but I can see why someone reasonably might. Off the top of my head, in no particular order:
Whole brain emulation isn’t the consensus best path to general AI. My intuition agrees with yours here, but you show no sign of understanding the subtleties involved well enough to be as certain as you are.
Lots of problematic unsupported assertions, e.g. “intelligent people generally have less children than those on the left half of the Bell curve”, “[rich people] are also more likely to have above-average IQs, else they wouldn’t be rich”, and “[violence and docility are] in the genes and the brain that they produce”.
Eugenics!?!
Ok, fine, eugenics, let’s talk about it. Your discussion is naive: you assume that IQ is the right metric to optimize for (see Raising the Sanity Waterline for another perspective), you assume that we can measure it accurately enough to produce the effect you want, you assume that it will go on being an effective metric even after we start conditioning reproductive success on it, and your policy prescriptions are socially inept even by LW standards.
Also, it’s really slow. That seems ok to you because you don’t believe that we’ll otherwise have recursive self-improvement in our lifetimes, but that’s not the consensus view here either.
I’m not interested in debating any of this, I just wanted to give you an outside perspective on your own writing. I hope it helps, and I hope you decide to stick around.