I’m quite confident that it’s possible, but not very confident that such a thing would likely be the first general superintelligence. I expect a period during which humans develop increasingly better models, until one or more of those can develop more generally capable models by itself. That last capability isn’t necessary for AI-caused doom, but it’s certainly one that would greatly increase the risks.
One of the biggest contributors to my “no AI doom” credence is the possibility that technical problems prevent us from ever developing anything sufficiently smarter than ourselves to threaten our survival. I don’t think it’s certain that we can do that, but I think the odds are that we can, that we almost certainly will if we can, and likely comparatively soon (decades rather than centuries or millennia).