What are the steps? Are we deliberately building a superintelligence with the goal of killing us all? If not, where do the motivation and ability come from?
For me, ability = capability = means. This is one of the two arguments I said were load-bearing. Where will the capability come from? Well, we are specifically trying to build the most capable systems possible.
Motivation (i.e., goals) is not strictly required. However, there are reasons to think that an AGI could have goals that are not aligned with those of most humans. The most fundamental is instrumental convergence: almost any final goal is better served by acquiring resources, preserving itself, and resisting shutdown, so a sufficiently capable system tends toward those subgoals regardless of what it ultimately wants.
Note that my original comment was not making this case. It was just a meta discussion about what it would take to refute Eliezer’s argument.