Right, I’m not claiming that AGI will do anything like straightforwardly maximize human utility. I’m claiming that if we work hard enough at teaching it to avoid disaster, it has a significant chance of avoiding disaster.
The fact that nobody is artificially mass-producing their genes is not a disaster from Darwin’s point of view; Darwin is vaguely satisfied that instead of a million humans there are now 7 billion humans. If the population stabilizes at 11 billion, that is also not a Darwinian disaster. If the population spreads across the galaxy, mostly in the form of emulations and AIs, but with even 0.001% of sentient beings maintaining some human DNA as a pet or a bit of nostalgia, that’s still way more copies of our DNA than the Neanderthals were ever going to get.
There are probably some really convincing analogies or intuition pumps somewhere that show that values are likely to be obliterated after a jump in intelligence, but I really don’t think evolution/contraception is one of those analogies.
I’m claiming that if we work hard enough at teaching it to avoid disaster, it has a significant chance of avoiding disaster.
As stated, I think Eliezer and I, and nearly everyone else, would agree with this.
The fact that nobody is artificially mass-producing their genes is not a disaster from Darwin’s point of view; Darwin is vaguely satisfied that instead of a million humans there are now 7 billion humans.
?? Why would human natural selection be satisfied with 7 billion but not satisfied with a million? Seems like you could equally say ‘natural selection is satisfied with a million, since at least a million is higher than a thousand’. Or ‘natural selection is satisfied with a hundred, since at least a hundred is higher than fifty’.
I understand the idea of extracting from a population’s process of natural selection a pseudo-goal, ‘maximize inclusive genetic fitness’; I don’t understand the idea of adding that natural selection has some threshold where it ‘feels’ ‘satisfied’.
Sure, the metaphor is strained because natural selection doesn’t have feelings, so it’s never going to feel satisfied, because it’s never going to feel anything. For whatever it’s worth, I didn’t pick that metaphor; Eliezer mentions contraception in his original post.
As I understand it, the point of bringing up contraception is to show that when you move from one level of intelligence to another, much higher one, the more intelligent agent can wind up optimizing for values that would be anathema to the less intelligent agents, even if the less intelligent agents have done everything they can to pass along their values. My objection to this illustration is that I don’t think anyone has demonstrated that human goals could plausibly be described as “anathema” to natural selection. Overall, humans are pursuing a set of goals that are relatively well-aligned with natural selection’s pseudo-goals.