“Strongly-recursive self-improvement” is a figment of the imagination; among the logical errors involved is confusion between properties of a program and properties of the world.
As for the rest: do you believe humanity can survive permanently as we are now, confined to this planet? If you do, then I will point you to the geological evidence to the contrary. If not, then it follows that without more advanced technology, we are dead. Neither I nor anybody else can know what will be the proximate cause of death for the last individual, or in what century, but certain extinction is certain extinction nonetheless.
Let us briefly review the discussion up to now, since many readers use the comments page, which does not provide much context. rwallace has been arguing that AI researchers are too concerned (or will become too concerned) about the existential risk from reimplementing EURISKO and things like that.
You have mentioned two or three times, rwallace, that without more advanced technology, humans will eventually go extinct. (I quote one of those mentions below.) You mention that to create and to manage that future advanced technology, civilization will need better tools to manage complexity. Well, I see one possible objection to your argument right there: better science and better technology might well decrease the complexity of the cultural information humans are required to keep on top of. Consider that once Newton gave our civilization a correct theory of dynamics, almost all of the books written before Newton on dynamics could safely be thrown away (the exceptions being books by Descartes and Galileo that help people understand Newton and put Newton in historical context), which of course constitutes a net reduction in the complexity of the cultural information that our civilization has to keep on top of. (If it does not seem like a reduction, that is because the possession of Newtonian dynamical theory made our civilization more ambitious about what goals to try for.)
“do you believe humanity can survive permanently as we are now, confined to this planet? If you do, then I will point you to the geological evidence to the contrary. If not, then it follows that without more advanced technology, we are dead.”
But please explain to me what your argument has to do with EURISKO and things like that: is it your position that the complexity of future human culture can be managed only with better AGI software?
And do you maintain that that software cannot be developed fast enough by AGI researchers, such as Eliezer, who are being very careful about existential risks?
In general, the things you argue are dangerous are slow dangers. You yourself refer to “geological evidence” which suggests that they are dangerous on geological timescales.
In contrast, research into certain areas of AI seems to me a genuinely fast danger: something with a high probability of wiping out our civilization in the next 30, 50 or 100 years. It seems unwise to increase fast dangers in order to decrease slow dangers. But I suppose you disagree that AGI research, if not done very carefully, is a fast danger. (I’m still studying your arguments on that.)
Reduction in complexity is at least conceivable, I’ll grant. For example, if someone invented a zero-point energy generator with the cost and form factor of an AA battery, much of the complexity associated with the energy industry could disappear.
But this seems highly unlikely. All the current evidence suggests the contrary: the breakthroughs that will be necessary and possible in the coming century will be precisely those of complex systems (in both senses of the term).
Human-level AGI in the near future is indeed neither necessary nor possible. But there is a vast gap between that and what we have today, and we will, yes, need to fill some of that gap. Perhaps a key breakthrough would have come from a young researcher who would have re-implemented EURISKO and from that experiment acquired a critical jump in understanding, and who has now quietly left, thinking EURISKO might blow up the world, to reconsider that job offer from Electronic Arts.
I do disagree that AGI research is a fast danger. I will grant you that there is a sense in which the dangers I am worried about are slow ones: barring unlikely events such as a large asteroid impact (likely only over longer timescales), I’m confident humanity will still exist 100 years from now.
But our window of opportunity may not. Consider that civilizations are mortal, for reasons unrelated to this conversation. Consider that environments conducive to scientific progress are considerably rarer and more transient than civilization itself. Consider also that the environment in which our civilization arose is gone and is not coming back. (For the simplest example: while fossil fuels still exist, the easily accessible deposits, so important for bootstrapping, are largely gone.) I think it quite possible that the 21st century may be the last hard step in the Great Filter, and that by the year 2100 the ultimate fate of humanity may in fact have been decided, even if nobody on that date yet knows it. I cannot of course be certain of this, but I think it likely enough that we cannot afford to risk wasting this window of opportunity.
One problem with this argument is how conjunctive it is: “(A) Progress crucially depends on breakthroughs in complexity management and (B) strong recursive self-improvement is impossible and (C) near-future human level AGI is neither dangerous nor possible but (D) someone working on it is crucial for said complexity management breakthroughs and (E) they’re dissuaded by friendliness concerns and (F) our scientific window of opportunity is small.”
My back-of-the-envelope, generous probabilities:
A. 0.5, this is a pretty strong requirement.
B. 0.9, for simplicity, giving your speculation the benefit of the doubt.
C. 0.9, same.
D. 0.1, a genuine problem of this magnitude is going to attract a lot of diverse talent.
E. 0.01, this is the most demanding element of the scenario, that the UFAI meme itself will crucially disrupt progress.
F. 0.05, this would represent a large break from our current form of steady scientific progress, and I haven’t yet seen much evidence that it’s terribly likely.
That product comes out to roughly 1:50,000. I’m guessing you think the actual figure is higher, and expect you’ll contest those specific numbers, but would you agree that I’ve fairly characterized the structure of your objection to FAI?
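As a sanity check on that figure, the product of the six estimates above can be verified in a few lines; the letters and numbers are exactly the ones listed, and nothing else is assumed:

```python
# Multiply the six generous probability estimates from the list above
# and express the result as approximate odds.
from math import prod

estimates = {
    "A": 0.5,   # progress depends on complexity-management breakthroughs
    "B": 0.9,   # strong recursive self-improvement is impossible
    "C": 0.9,   # near-future human-level AGI neither dangerous nor possible
    "D": 0.1,   # an AGI tinkerer is crucial to those breakthroughs
    "E": 0.01,  # the UFAI meme dissuades that person
    "F": 0.05,  # the scientific window of opportunity is small
}

p = prod(estimates.values())
odds = round(1 / p)
print(p, odds)  # about 2.025e-05, i.e. roughly 1 in 49,000 -- "1:50,000" rounded
```

The product is about 2 × 10⁻⁵, so "roughly 1:50,000" is the right rounding of these inputs.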