That aside, I’m not sure what argument you’re making here.
I do not often comment on Less Wrong. (Although I am starting to; this is one of my first comments!) Hopefully, my thoughts will become clearer as I write more and get better acquainted with the local assumptions and cultural codes.
In the meantime, let me expand:
Two possible interpretations that come to mind (probably both of these are wrong):
You’re arguing that all humans in the world will refuse to build dangerous AI, therefore AI won’t be dangerous.
You’re arguing that natural selection doesn’t tell us how hard it is to pull off a pivotal act, since natural selection wasn’t trying to do a pivotal act.
2 seems broadly correct to me, but I don’t see the relevance. Nate and I indeed think that pivotal acts are possible. Nate is using natural selection here to argue against ‘AI progress will be continuous’, not to argue against ‘it’s possible to use sufficiently advanced AI systems to end the acute existential risk period’.
2 is the correct one.
But even after rereading the post with your interpretation in mind, I am still confused about why 2 is irrelevant. Consider:
The techniques you used to train it to allow the operators to shut it down? Those fall apart, and the AGI starts wanting to avoid shutdown, including wanting to deceive you if it’s useful to do so.
Why does alignment fail while capabilities generalize, at least by default and in predictable practice?
On one hand, in the analogy with Natural Selection, "by default" means "when you don't even try to do alignment, when you 100% optimize for a given goal". I.e., when NS optimized for IGF, capabilities generalized, but alignment did not.
On the other hand, when speaking of alignment directly, "by default" means "even if you optimize for alignment, but without keeping some specific considerations in mind". I.e., some specific alignment proposals will fail.
My point was that the former is not evidence for the latter.