Sometimes I’ve tried to argue in favor of eugenics. The usual response I got has been something like: “but what if we create a race of super-human beings that wipes us out?”.
It’s interesting that people are much more prone to believe it’s possible to create an unfriendly human super-intelligence than an unfriendly artificial one.
Probably because we have actual history of unfriendly humans who justify genocide by their own perceived superiority?
It’s the “super” part that I’m curious about. Of course we have unfriendly intelligence, but I get the feeling that people in general believe it’s much easier to create a biological super-intelligence than an artificial one.
They would. That is what eugenics is. No existing people get uplifted, the turnover of population just replaces them by better people.
It’s tangential to the main topic (“why people believe a biological super-intelligence is more probable than an artificial one”), but I think that what you said is not warranted at all.
First, we know very little about the biology of intelligence: at present we are not able to explain the current variability in human intelligence, and we have even less idea how to genetically enhance it.
Second, we share a psychological unity, and genetically improved humans will presumably be raised within human families, so we have a much better chance that they will share our values.
Third, even if no people get uplifted and the generational change brings about better people, it’s still a net gain for humanity overall.
The existential risk of eugenics, super-people violently replacing normal ones, is the least probable, not the most probable, scenario.
I’m not saying we should concentrate on eugenics (I still feel that UFAI is a much bigger threat); I’m saying we should not avoid it because of x-risks.
I agree with all that, I was just running with the idea that one way or another, truly superior beings would in the end displace the rest, with or without actual war.
Now, successful breeding combined with life extension for all, so that present-day average people get to live into a future dominated by the results of several generations of breeding, that could be an interesting scenario for an SF story. I would expect the violence to originate with the marginalised rather than the elite.
You might be interested in “Nobody Home” by Joanna Russ. A woman who’s reasonably bright by modern standards just doesn’t fit in a future where everyone else is much brighter than she is. No violence, just a miserable trap for her.
I don’t consider that future all that plausible—it seems unlikely that there was only one person at that intelligence level.
The concern is that there will be no more people like us—only the improved(?) model will remain.
That’s how evolution works, natural or artificial, as long as we keep dying.
That’s how evolution works sometimes. In general, there’s a noticeable chance of both species surviving in different niches.
Interesting. I’ve assumed that the big risk of eugenics (especially if it includes genetic engineering) is that people will choose something stupid and/or we’ll lose too much variation.
Any thoughts about whether we’ll converge on tall, blond, lean, hypomanic, and good at multiple choice tests with a sprinkling of people who look like celebrities, or instead have a wild explosion of physical and mental variation?
Huh, I’ve assumed that the big risk of eugenics is that the ability to reproduce will be used as a measure of social control and status by a not-very-deserving upper class, and will make a lot of people very unhappy. But with genetic engineering, yeah, we could avert that.
That depends on what genetic engineering costs.
Does it matter, when in ~1 generation we will have the ability to redesign our bodies at will?
Eugenics is a 20th century concern.
Where does the ~1 generation estimate come from?
Kurzweil-like graphs regarding advancements in molecular nanotechnology, plus an understanding of nanomedicine.
What exactly is nanomedicine?
http://en.wikipedia.org/wiki/Nanomedicine