If AGI happens this decade, the risks are very much real and should not be dismissed, certainly not for such a flimsy reason.
Especially since the near-term risks, which we can expect to become increasingly visible and present, will likely shift how seriously AI x-risk is taken. I posit that x-risk won't remain speculative for long, on roughly the same timeline you gave.