(Sorry for delay, I thought I had notifications set up but apparently not)
I don’t at the moment have a comprehensive taxonomy of the possible scenarios. The two I mentioned above… well, at a high level, what’s going on is that (a) CAIS seems implausible to me in various ways—e.g. it seems to me that more unified, agenty AI would be able to outcompete comprehensive AI services in a variety of important domains—and (b) I haven’t heard a convincing account of what’s wrong with the classic scenario. The accounts I’ve heard usually turn out to be straw men (e.g. claiming that the classic scenario depends on intelligence being a single, unified trait) or merely point out that other scenarios are plausible too (e.g. Paul’s point that we could get lots of crazy transformative AI things happening in the few years leading up to human-level AGI).