Reduction in complexity is at least conceivable, I’ll grant. For example, if someone invented a zero-point-energy generator with the cost and form factor of an AA battery, much of the complexity associated with the energy industry could disappear.
But this seems highly unlikely. All the current evidence suggests the contrary: the breakthroughs that will be necessary and possible in the coming century will be precisely those of complex systems (in both senses of the term).
Human-level AGI in the near future is indeed neither necessary nor possible. But there is a vast gap between that and what we have today, and we will, yes, need to fill some of that gap. Perhaps a key breakthrough would have come from a young researcher who would have re-implemented Eurisko and from the experiment acquired a critical jump in understanding—and who has now quietly left, thinking Eurisko might blow up the world, to reconsider that job offer from Electronic Arts.
I do disagree that AGI research is a fast danger. I will grant you that there is a sense in which the dangers I am worried about are slow ones: barring unlikely events like a large asteroid impact (which is likely only over longer time scales), I’m confident humanity will still exist 100 years from now.
But our window of opportunity may not. Consider that civilizations are mortal, for reasons unrelated to this conversation. Consider that environments conducive to scientific progress are considerably rarer and more transient even than civilization itself. Consider also that the environment in which our civilization arose is gone, and is not coming back. (For the simplest example, while fossil fuels still exist, the easily accessible deposits thereof, so important for bootstrapping, are largely gone.) I think it quite possible that the 21st century may be the last hard step in the Great Filter, that by the year 2100 the ultimate fate of humanity may in fact have been decided, even if nobody on that date yet knows it. I cannot of course be certain of this, but I think it likely enough that we cannot afford to risk wasting this window of opportunity.
One problem with this argument is how conjunctive it is: “(A) Progress crucially depends on breakthroughs in complexity management and (B) strong recursive self-improvement is impossible and (C) near-future human level AGI is neither dangerous nor possible but (D) someone working on it is crucial for said complexity management breakthroughs and (E) they’re dissuaded by friendliness concerns and (F) our scientific window of opportunity is small.”
My back-of-the-envelope, generous probabilities:
A. 0.5, this is a pretty strong requirement.
B. 0.9, for simplicity, giving your speculation the benefit of the doubt.
C. 0.9, same.
D. 0.1, a genuine problem of this magnitude is going to attract a lot of diverse talent.
E. 0.01, this is the most demanding element of the scenario, that the UFAI meme itself will crucially disrupt progress.
F. 0.05, this would represent a large break from our current form of steady scientific progress, and I haven’t yet seen much evidence that it’s terribly likely.
That product comes out to roughly 1:50,000. I’m guessing you think the actual figure is higher, and expect you’ll contest those specific numbers, but would you agree that I’ve fairly characterized the structure of your objection to FAI?
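For the record, the arithmetic behind that figure can be sketched in a few lines (the letter labels and per-conjunct comments just restate the estimates above):

```python
# Back-of-the-envelope check of the conjunctive probability estimate.
# Each value is the generous probability assigned to one conjunct (A-F).
probs = {
    "A": 0.5,   # progress depends on complexity-management breakthroughs
    "B": 0.9,   # strong recursive self-improvement is impossible
    "C": 0.9,   # near-future human-level AGI neither dangerous nor possible
    "D": 0.1,   # an AGI researcher is crucial to those breakthroughs
    "E": 0.01,  # that researcher is dissuaded by friendliness concerns
    "F": 0.05,  # the scientific window of opportunity is small
}

product = 1.0
for p in probs.values():
    product *= p

print(product)             # ~2.025e-05
print(round(1 / product))  # ~49383, i.e. roughly 1 in 50,000
```

The product is about 2×10⁻⁵, which rounds to the 1:50,000 quoted above.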