AI improving itself is most likely to look like AI systems doing R&D in the same way that humans do. “AI smart enough to improve itself” is not a crucial threshold; AI systems will get gradually better at improving themselves. Eliezer appears to expect AI systems to perform extremely fast recursive self-improvement before they are able to make superhuman contributions to other domains (including alignment research), but I think this expectation is mostly unjustified. And if Eliezer doesn’t believe this, then his arguments about the alignment problem that humans need to solve appear to be wrong.
One other way I’ve been thinking about this issue recently is that humans have fundamental cognitive limits (e.g. brain size) that AGI wouldn’t have. There are possible biotech interventions to remove these limits, but even the easiest ones (e.g. just increasing skull size) would still take decades to yield results. AI, meanwhile, could be improved (by humans and by AIs) on much faster timescales. (How important something like brain size is depends on how much intellectual progress is explained by maximum intelligence rather than total intelligence; a naive reading of intellectual history suggests maximum intelligence matters a lot, given that a high percentage of relevant human knowledge follows from fewer than 100 important thinkers.)
This doesn’t lead me to assign high probability to “takeoff in 1 month”; my expectation is still that AI improving AI will be an extension of humans improving AI (and then of centaurs improving AI), but the iteration cycle could be a lot faster because AIs lack fundamental human cognitive limits.
My sense is that we are on broadly the same page here. I agree that “AI improving AI over time” will look very different from “humans improving humans over time” or even “biology improving humans over time.” But I think that it will look a lot like “humans improving AI over time,” and that’s what I’d use to estimate timescales (months or years, most likely years) for further AI improvements.