rwallace has been arguing that AI researchers are too concerned (or will become too concerned) about the existential risk from UFAI. He writes that
we need software tools smart enough to help us deal with complexity.
rwallace: can we deal with complexity sufficiently well without new software that engages in strongly-recursive self-improvement?
Without new AGI software?
One part of the risk that rwallace says outweighs the risk of UFAI is that
we remain confined to one little planet . . . with everyone in weapon range of everyone else
The only response rwallace suggests to that risk is
we need more advanced technology, for which we need software tools smart enough to help us deal with complexity
rwallace: please give your reasoning for how more advanced technology decreases the existential risk posed by weapons more than it increases it.
Another part of the risk that rwallace says outweighs the risk of UFAI is that
we remain confined to one little planet running off a dwindling resource base
Please explain how dwindling resources present a significant existential risk. I can come up with several arguments myself, but I’d like to see the one or two you consider the strongest.
Let us briefly review the discussion up to now, since many readers use the comments page, which does not provide much context. rwallace has been arguing that AI researchers are too concerned (or will become too concerned) about the existential risk from reimplementing EURISKO and things like that.
You have mentioned two or three times, rwallace, that without more advanced technology, humans will eventually go extinct. (I quote one of those mentions below.) You mention that to create and to manage that future advanced technology, civilization will need better tools to manage complexity. Well, I see one possible objection to your argument right there: better science and better technology might well decrease the complexity of the cultural information humans are required to keep on top of. Consider that once Newton gave our civilization a correct theory of dynamics, almost all of the books on dynamics written before Newton could safely be thrown away (the exceptions being books by Descartes and Galileo that help people understand Newton and put him in historical context), which constitutes a net reduction in the complexity of the cultural information our civilization has to keep on top of. (If it does not seem like a reduction, that is because possession of Newtonian dynamical theory made our civilization more ambitious about what goals to try for.)
But please explain to me what your argument has to do with EURISKO and things like that: is it your position that the complexity of future human culture can be managed only with better AGI software?
And do you maintain that that software cannot be developed fast enough by AGI researchers such as Eliezer who are being very careful about existential risks?
In general, the things you argue are dangerous are slow dangers. You yourself refer to “geological evidence”, which suggests that they are dangerous on geological timescales.
In contrast, research into certain areas of AI seems to me a genuinely fast danger: something with a high probability of wiping out our civilization in the next 30, 50, or 100 years. It seems unwise to increase fast dangers in order to decrease slow dangers. But I suppose you disagree that AGI research, if not done very carefully, is a fast danger. (I’m still studying your arguments on that.)