The problem with FAI is that it is nearly impossible for human minds, even those of high intellect, to get good results through philosophy alone, without experimental feedback. Aristotle famously got it wrong when he deduced philosophically that heavier objects fall faster than lighter ones.
Also, I believe that it is a pointless endeavor for now. Here are two reasons why I think that is the case.
1. We humans don’t have any idea whatsoever as to what constitutes the essence of an intelligent system. Because of our limited intellects, our best bet is simply to take the one intelligent system that we know of, the human brain, and replicate it artificially. This is a far easier task than designing an intelligence from scratch, since the design work has already been done for us by natural (and sexual) selection.
Our best hope and easiest path for AI is simply to replicate the human brain (preferably the brain of an intelligent and docile human being) and make a body suitable for it to inhabit. Henry Markram is working on this (hopefully he will use himself or someone like himself for the first template, instead of some stupid or deranged human), and he notably hasn’t been terribly concerned with Friendly AI. Ask yourself this: what makes for FH (Friendly Humans)? And here we turn to neuroscience, evo-psych and… the thing that some people want to avoid discussing for fear of making others uncomfortable: HBD. People of higher IQ are, on average, less predisposed to violence. Inbred populations are more predisposed to clannish behavior (we would ideally want an AI that is the opposite of that: maximally tolerant of out-groups). Some populations of human beings are more predisposed to violence, while some have a reputation for docility (you can see that in the crime rates). It’s in the genes and the brain that they produce, combined with other non-inherited factors like random mutations and the way proteins fold and are expressed.
So obviously the most promising way to create Friendly AI at this point in time is to replicate the brain of a Friendly Human.
2. We might not be smart or creative enough on average to build an FAI, or it might take too long to do so. This is a problem that, if it exists, will not only not go away but will actually compound itself. As long as there are no restrictions whatsoever on reproduction and some form of welfarism and socialism exists in most nations on Earth, there will be dysgenics with regard to intelligence, since intelligent people generally have fewer children than those on the left half of the Bell curve, while the latter are basically subsidized to reproduce by means of wealth transfer from the rich (who are also more likely to have above-average IQs, else they wouldn’t be rich).
Even if we do come to possess the knowledge to replicate the human brain, I believe it is highly unlikely to happen within a single generation. AI (friendly or not) is NOT just around the corner. Humanity doesn’t even possess the ability to write a bugless operating system, or to build a computer that obeys sane laws of personal computing. What’s worse, it once possessed the ability to build something reasonably close to these ideals, but that ability is lost today. If building FAI takes more than one generation, and the survival of billions of people depends on it, then we should start sooner rather than later.
The current bottleneck in AI, and in most science generally, is the number of human minds able and willing to do the work. Without the ability to mass-produce at least human-level AI, we desperately need to maximize the proportion of intelligent and conscientious human beings, by producing as many of them as possible. The sad truth is this: one Einstein or Feynman is more valuable, when it comes to the continued well-being of humanity, than the 99% of other human beings who are simply incapable of producing such high-level work and thought, whether because of genetic or environmental factors (conditions in the uterus, iodine deficiency, and so on). The higher the average intelligence of humanity, the more science thrives.
Eugenics for intelligence is the obvious answer. This can be achieved through various means, discussed in this very good post on West Hunter. Just one example, among the slowest but one that advanced nations are 100% capable of implementing right now: advanced nations already possess the means to create embryos using the sperm and eggs of the best and brightest scientists alive today. If our leaders simply conditioned welfare, and even payments of large sums of money, for below-average-IQ women on their acting as surrogate mothers for “genius” embryos, then in 20-30 years we could have dozens of Feynmans and tens of thousands of Yudkowskys working on AI. This would have the added benefit of keeping the low-IQ mothers otherwise pregnant and unavailable for passing low-IQ genes to the next generation, which would mean fewer people who are a net drain on the future society and would only cause time-consuming problems for the genius kids (like stealing their possessions or engaging in other criminal activities).
I do realize that increasing intelligence in this manner is bound to have an upper limit and, furthermore, will have other drawbacks. The high incidence of Tay-Sachs disease among Ashkenazi Jews (average IQ around 110) is an illustration of this. But I believe the discoveries of healthy high-IQ people have the potential to provide more hedons than the dolors suffered by those with Tay-Sachs (or other afflictions of high-IQ people, including less serious ones like myopia).
EDIT: Given the above, and especially if 2. is indeed the case, it is not unreasonable to believe that donating to AmRen or Steve Sailer has greater utility than donating to SIAI. I believe the brainpower at SIAI is better spent on a problem that is almost as difficult as FAI, namely making HBD acceptable discourse in scientific and political circles (preferably without telling people who wouldn’t fully grasp it and would instead use it as justification for hatred towards Blacks), and specifically on peaceful, non-violent eugenics for intelligence as a policy for the improvement of human societies over time.
A Feynman raised by an 80 IQ mother… wouldn’t be Feynman
Judith Rich Harris might disagree.
I concede that, under some really extreme environmental conditions, any genetic advantages would be canceled out. So, you might actually be right if the IQ 80 mother is really bad. Money should be provided to poor families by the state, but only as long as they raise their child well—as determined by periodic medical checks. Any person, no matter the IQ, can do one thing reasonably well, and that is to raise children to maturity.
But I believe you are taking the importance of parenthood way too far, and disregarding the hereditarian point of view too easily. The blank-slate bias is something to be avoided. I would suggest you read this article by Matt Ridley.
Excerpt:
Today, a third of a century after the study began and with other studies of reunited twins having reached the same conclusion, the numbers are striking. Monozygotic twins raised apart are more similar in IQ (74%) than dizygotic (fraternal) twins raised together (60%) and much more than parent-children pairs (42%); half-siblings (31%); adoptive siblings (29%-34%); virtual twins, or similarly aged but unrelated children raised together (28%); adoptive parent-child pairs (19%) and cousins (15%). Nothing but genes can explain this hierarchy.
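For readers who want to see what those percentages imply quantitatively, here is a minimal sketch of the textbook variance decomposition, assuming the quoted figures are IQ correlations; the decomposition and its caveats are standard behavioral-genetics shorthand, not something taken from the Ridley article itself:

```python
# Back-of-envelope ACE decomposition from the kinship correlations quoted above.
# Assumes the percentages are IQ correlations (r * 100). Under the standard
# additive model: MZ twins raised apart share all their genes and (ideally)
# none of their environment, so their correlation directly estimates
# heritability (A); adoptive siblings share environment but no genes,
# estimating shared environment (C); the remainder is non-shared environment
# plus measurement error (E).

r_mz_apart = 0.74        # monozygotic twins raised apart
r_adoptive_sibs = 0.315  # adoptive siblings (midpoint of the quoted 29%-34%)

A = r_mz_apart           # ~ genetic share of IQ variance
C = r_adoptive_sibs      # ~ shared-environment share
E = 1.0 - A - C          # ~ everything else

print(f"A (genes):              {A:.2f}")
print(f"C (shared environment): {C:.2f}")
print(f"E (residual):           {E:.2f}")
# Note: E comes out slightly negative here, because the point estimates come
# from different samples and the simple additive model only approximately fits.
```

The slightly negative residual is itself instructive: these correlations come from different studies, so the neat decomposition should be read as an order-of-magnitude statement, not a precise partition.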
IQ, sure. What he does with it? That’s another story. I shudder to think what a Feynman could have done in service of some strict agenda he’d been trained into.
Any person, no matter the IQ, can do one thing reasonably well, and that is to raise children to maturity.
This statement is obviously false and obviously falsifiable.
Insert example of vegetative-state life-support cripple “raising a child” (AKA not actually doing anything and having an effective/apparent IQ of ~0, perhaps even dying as soon as the child touches something they weren’t supposed to).
At this point, a rock would be just as good at raising a child. At least the child can use the rock to kill a small animal and eat it.
Is a “vegetative-state life-support cripple” a person at all?
Is your question/objection rhetorical, or did you just not understand the A Human’s Guide to Words sequence?
Taboo “person”, and if that doesn’t work, taboo “raise children”, and if that still doesn’t work, taboo “no matter the IQ” or “can do” or “reasonably well” or even the entire list of symbols that is generating the confusion.
I objected and gave a thought experiment to illustrate the falsifiability of one specific assertion, which could be nothing other than what I believed you meant by that string of symbols, based on my prior beliefs about what the symbols represented in empirical conceptspace.
If you question my objection on the grounds that I used a symbol incorrectly, then you should question the symbol usage, not the objection as a whole via a straw-man assertion built from your different version of the symbol.
The problem with FAI is that it is nearly impossible for human minds, even those of high intellect, to get good results through philosophy alone, without experimental feedback
I do not understand how this has anything to do with FAI
Because of our limited intellects, our best bet is simply to take the one intelligent system that we know of, the human brain, and replicate it artificially.
This is not in fact “simple” to do. It’s not even clear what level of detail will be needed: just a neural network? Hormones? Glial cells? Modelling of the actual neurons?
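To make concrete how much turns on that question, here is a back-of-envelope sketch of how raw storage requirements scale with the chosen level of detail. The neuron and synapse counts are common literature ballparks, and the bytes-per-element figures are illustrative assumptions only; this is a sketch of the uncertainty, not an actual requirements estimate:

```python
# Rough storage estimates for whole-brain emulation at different levels of
# detail. Neuron/synapse counts are common literature ballparks; the
# bytes-per-element figures are illustrative assumptions. Real requirements
# depend on exactly the unresolved questions above (hormones, glia,
# sub-neuronal dynamics), which this sketch deliberately ignores.

NEURONS = 8.6e10   # ~86 billion neurons (common estimate)
SYNAPSES = 1.5e14  # ~150 trillion synapses (common estimate)

levels = {
    # level of detail: (state bytes per neuron, bytes per synapse)
    "point neurons + synaptic weights": (64, 4),
    "spiking neurons + synapse state": (512, 16),
    "multi-compartment neuron models": (64_000, 64),
}

for name, (b_neuron, b_synapse) in levels.items():
    total_bytes = NEURONS * b_neuron + SYNAPSES * b_synapse
    print(f"{name:34s} ~ {total_bytes / 1e12:10.1f} TB")
```

The spread of roughly two orders of magnitude across the rows is the point: until the required level of detail is known, “replicate the brain” is not one engineering problem but several very different ones.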
So obviously the most promising way to create Friendly AI at this point in time is to replicate the brain of a Friendly Human.
Are you sure you understand what FAI actually refers to? In particular, with p ≈ 1, no living human qualifies as Friendly; even if they did, we would still need to solve several open problems also needed for FAI (like ensuring that value systems remain unchanged during self-modification) for a Friendly Upload to remain Friendly.
With regard to your claims regarding HBD, eugenics, etc.: Evolution is a lot weaker than you think it is, and we know a lot less about genetic influence on intelligence than you seem to think. (See e.g. here or here.) Such a program would be incredibly difficult to get implemented, and so is probably not worth it.
I do not understand how this has anything to do with FAI
It is relevant because FAI is currently a branch of pure philosophy. Without constant experimental feedback and contact with reality, philosophy simply cannot deliver useful results the way science can.
This is not in fact “simple” to do. It’s not even clear what level of detail will be needed: just a neural network? Hormones? Glial cells? Modelling of the actual neurons?
Are there any other current proposals to build AGI that don’t start from the brain? From what I can tell, people don’t even know where to begin with those.
Are you sure you understand what FAI actually refers to? In particular, with p ≈ 1, no living human qualifies as Friendly; even if they did, we would still need to solve several open problems also needed for FAI (like ensuring that value systems remain unchanged during self-modification) for a Friendly Upload to remain Friendly.
At some point you have to settle for “good enough” and “friendly enough”. Keep in mind that simply stalling AI until you have your perfect FAI philosophy in place may have a serious cost in terms of human lives lost due to inaction.
(like ensuring that value systems remain unchanged during self-modification)
But what if the AI is programmed with a faulty value system by its human creators?
Such a program would be incredibly difficult to get implemented, and so is probably not worth it.
Fair enough; I gave it as an example because it is possible to implement now, at least technically, though obviously not politically. Things like genome repair seem more distant in time. Cloning brilliant scientists seems like a better course of action in the long run, and one with fewer controversies. However, this would still leave the problem of what to do with those who are genetically more prone to violence, who are a net drag on society.
Before you build a new crop of them, first you should probably make sure society is even listening to its Einsteins and Feynmans, or that the ones you have are even interested in solving these problems. It does no good to create a crop of supergeniuses who aren’t interested in solving your problems for you and wouldn’t be listened to if they did.
Society will listen to its Einsteins and Feynmans once they band together and figure out how to use the dark arts to take control of the mass media and universities away from their present owners and use them for their own, more enlightened goals. Or at least once they ingratiate themselves with the current rulers; they could promise to build new bombs or drones, for example. As for their not being interested in solving FAI and problems of that kind, that’s really not a very convincing argument IMO. Throughout history, in societies of high average IQ with a culture tolerant of science, there has never been a shortage of people curious about the world. Why wouldn’t people with stratospheric IQs be curious about the world and enjoy the challenge of science, especially if they live in a brain-dead society which routinely engages in easy and boring trivialities? I mean, which would you choose: working on FAI, or watching the Kardashians? I know what I would choose, even though my IQ is not very much above average and I’m really bad at probability problems.
There will never be a shortage of nerds and Asperger types out there, at least not for a long time, even with the current dysgenic trends.
You assume they’d want to band together, and you also underestimate modern entertainment; Dwarf Fortress, for example.
You also assume they’d care to share the products of their curiosity with a brain-dead society.
I upvoted you for responding with a refutation and not simply downvoting.
OK, I have two downvotes already. I can’t say I’m surprised, because what I wrote is not politically correct, and probably some of you thought that I broke the “politics is the mind-killer” informal rule (a rule which is not really rational if you happen to believe that the default political position, the one most likely to pass under the radar as non-mind-killing, is not static but in fact constantly shifting, usually in a leftwards direction).
For the sake of all rationalists, I hope I was downvoted for the latter reason. Otherwise, all hope for rational argument is lost, if even people in the rationalist community adopt thought processes closer to those of politicians (i.e., demotism) than of true scientists.
The unfortunate fact is that you cannot separate the speed of scientific progress from public policy or from the particular structure of the society engaged in science. Science is not some abstract ideal; it is the triumph of the human mind, of the still-rare people possessing both intelligence and rationality (the latter may even be restricted to their area of expertise; see Abdus Salam or Georges Lemaître). Humans are inherently political animals. The quality of science depends directly, first and foremost, on the number and quality of the minds performing it, and some political positions happen to increase that number more than others. Simply ignoring the connection is not an option if you really believe, as I do, in the promise of science to help improve the lives of every human being, no matter their IQ or mental profile.
If you downvote me, I have one request: I would at least like to read why.
Discussion of intelligence enhancement via reproductive biotechnology can occur smoothly here, e.g. in Wei Dai’s post and associated comment thread several months ago. Looking at those past comments, I am almost certain that I could rewrite your comment to convey the same core points and yet have it be upvoted.
I think your comment was relatively ill-received because:
1) It threw in a number of other questionable claims on different topics without extensive support, rather than focusing on one at a time, and suggested very high confidence in the agglomeration while not addressing important variables: how much would a shift in the IQ distribution help or hurt (see the sketch after this list); how much does this depend on social norms rather than just the steady advance of technology; how much leverage do a few people have on these norms by participating in ideological arguments; and so forth.
2) The style was more stream-of-consciousness and in-your-face, rather than cautiously building up an argument for consideration.
3) There was a vibe of “grr, look at that oppressive taboo!” or “Hear me, O naive ideologically-blinkered folks!” That signals to some extent that one is in a “color war” mood, or attracted to the ideological high of striking for one’s views against ideological enemies. That positively invites a messy political fight rather than a focused discussion of what reproductive biotechnology could do for humanity’s prospects.
4) People like Nick Bostrom have written whole papers about biological enhancement, e.g. his paper on using evolutionary heuristics to look for promising enhancement possibilities. Look at its bibliography. Or consider the Less Wrong post by Wei Dai I mentioned earlier, and others like it. People focused on AI risk are not simply unaware of the behavioral genetics or psychometrics literatures, and it’s a bit annoying to have them presented as some kind of secret knock-down argument.
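To illustrate why the first of those variables is a genuine open question rather than a rhetorical one, here is a minimal sketch of the relevant normal-tail arithmetic; the threshold and the shift sizes are arbitrary assumptions chosen only to show the sensitivity, not claims about any actual intervention:

```python
# How a shift in the mean of a normal distribution changes the fraction of
# the population above a high threshold. Threshold and shift values are
# arbitrary illustrations, not claims about any actual policy or its effects.
from statistics import NormalDist

MEAN, SD = 100.0, 15.0  # conventional IQ scaling
THRESHOLD = 145.0       # an arbitrary "3 sigma" cutoff

for shift in (0.0, 5.0, 10.0):
    dist = NormalDist(MEAN + shift, SD)
    frac = 1.0 - dist.cdf(THRESHOLD)
    print(f"mean shift +{shift:4.1f}: {frac:.4%} above {THRESHOLD:.0f}")
```

Tail fractions are extremely sensitive to mean shifts, which cuts both ways: it is why such proposals look dramatic on paper, and also why small errors in the underlying assumptions compound fastest exactly at the tails one cares about.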
I didn’t downvote you, but I can see why someone reasonably might. Off the top of my head, in no particular order:
Whole brain emulation isn’t the consensus best path to general AI. My intuition agrees with yours here, but you don’t show any sign that you understand the subtleties involved well enough to be as certain as you are.
Lots of problematic unsupported assertions, e.g. “intelligent people generally have fewer children than those on the left half of the Bell curve”, “[rich people] are also more likely to have above-average IQs, else they wouldn’t be rich”, and “[violence and docility are] in the genes and the brain that they produce”.
Eugenics!?!
Ok, fine, eugenics, let’s talk about it. Your discussion is naive: you assume that IQ is the right metric to optimize for (see Raising the Sanity Waterline for another perspective), you assume that we can measure it accurately enough to produce the effect you want, you assume that it will go on being an effective metric even after we start conditioning reproductive success on it, and your policy prescriptions are socially inept even by LW standards.
Also, it’s really slow. That seems ok to you because you don’t believe that we’ll otherwise have recursive self-improvement in our lifetimes, but that’s not the consensus view here either.
I’m not interested in debating any of this, I just wanted to give you an outside perspective on your own writing. I hope it helps, and I hope you decide to stick around.