(Sorry for delay, I thought I had notifications set up but apparently not)
I don’t at the moment have a comprehensive taxonomy of the possible scenarios. The two I mentioned above… well, at a high level, what’s going on is that (a) CAIS seems implausible to me in various ways—e.g. it seems to me that more unified and agenty AI would be able to outcompete comprehensive AI systems in a variety of important domains, and (b) I haven’t heard a convincing account of what’s wrong with the classic scenario. The accounts that I’ve heard usually turn out to be straw men (e.g. claiming that the classic scenario depends on intelligence being a single, unified trait) or merely pointing out that other scenarios are plausible too (e.g. Paul’s point that we could get lots of crazy transformative AI things happening in the few years leading up to human-level AGI).
I’ve seen advice that I shouldn’t argue with people, but I’ve found that giving people evidence is more persuasive than simply telling them they’re wrong. This post argues that in a rationalist society, giving evidence against someone’s position would be epistemically rude, regardless of whether they “perceive evidence from other people”. So it’s really hard to argue with people when you don’t believe them, even if your best estimate is that arguing is a good idea.
Question: how do you evaluate the plausibility of each scenario, and potentially of other ways the AI development timeline might go?