MIRI thinks that the fact that evolution hasn’t been putting much effort into optimizing for general intelligence is a reason to expect discontinuous progress? Apparently, Paul’s point is that once we recognize evolution has been putting little effort into optimizing for general intelligence, we can’t tell much about the likely course of AGI development from evolutionary history, which leaves us in the default position of ignorance. He then further argues that the default case is that progress is continuous.
So far as I can tell, Paul’s point is that, absent specific reasons to think otherwise, the prima facie case is that whenever we are trying hard to optimize for some criterion, we should expect the ‘many small changes that add up to one big effect’ situation.
Then he goes on to argue that the specific arguments for AGI being a rare case where this isn’t true (as with nuclear weapons) are either wrong or not strong enough to make discontinuous progress plausible.
From what you just wrote, it seems like the folks at MIRI agree that we should have the prima facie expectation of continuous progress, and I’ve read elsewhere that Eliezer thinks the case for recursive self-improvement leading to a discontinuity is weaker or less central than it first seemed. So, are MIRI’s main reasons for disagreeing with Paul down to other arguments (hence the switch from the intelligence explosion hypothesis to the general idea of rapid capability gain)?
I would think the most likely place to disagree with Paul (if not on the intelligence explosion hypothesis) would be if you expected that the right combination of breakthroughs crosses a ‘generality threshold’ (or supplies the ‘secret sauce’, as Paul calls it) and so leads to a big jump in capability, while inadequate achievement on any one of the breakthroughs won’t do.
In Human Compatible, Stuart Russell gives a list of the elements he thinks will be necessary for the ‘secret sauce’ of general intelligence: human-like language comprehension, cumulative learning, discovering new action sets, and managing its own mental activity. (I would add that somebody making that list 30 years ago would have included perception and object recognition, and somebody making it 60 years ago would also have added efficient logical reasoning from known facts). Let’s go with Russell’s list, so we can be a bit more concrete. Perhaps this is your disagreement:
An AI with (e.g.) good perception and object recognition, language comprehension, cumulative learning ability, and the ability to discover new action sets, but with a merely adequate or poor ability to manage its own mental activity, would be (Paul thinks) reasonably capable compared to an AI that is good at all of these things, but (MIRI thinks) much less capable. MIRI has conceptual arguments (to do with the nature of general intelligence) and empirical arguments (comparing the brains and pragmatic capabilities of humans and chimps) in favour of this hypothesis; Paul thinks the conceptual arguments are too murky and unclear to be persuasive, and that the empirical arguments don’t show what MIRI thinks they show. Am I on the right track here?