As an optimist/skeptic about the alignment problem, I can confirm that code generation is much more persuasive to me than other AI-risk hypotheticals. I’m still not convinced that an early AGI would be powerful enough to hack the internet, or that hacking the internet could lead to x-risk (much of civilization is still not computerized, and it is very resilient), but code generation at least sits at the intersection of “it’s plausible that an early AGI would have superhuman coding capacity” and “hacking the internet is a plausible thing to do” (unlike nanotech or an extinction-causing super-plague).