This seems right, though I’d interpreted the context of Sarah’s post to be more about what we expect in a pre-superintelligence economy.
Yes, I agree that that’s what the post was talking about. I do think my comment is still relevant since the transition time from pre-superintelligence human-level-AGI economy to superintelligent-AGI-economy may be just a few months. Indeed, that is exactly what I expect due to the high likelihood I place on the rapid effects of recursive self-improvement enabled by human-level-AGI.
I would expect that the company developing the human-level-AGI may even observe the beginnings of a successful RSI process and choose to pursue that in secret, without ever releasing the human-level-AGI. After all, if the model is useful for rapid, potent RSI, then releasing it would give their competitors a chance to catch up or overtake them, since the competitors would also have access to the human-level-AGI.
Thus, from the point of view of outside observers, it may seem that we jump straight from no-AGI to a world affected by technology developed by superintelligent-AGI, without ever seeing either the human-level-AGI or the superintelligent-AGI deployed.
[Edit: FWIW I think that Tom Davidson’s report errs slightly in the other direction: it forecasts that things in the physical world will move somewhat faster than I expect, maybe 1.2x to 5x faster. So that puts me somewhere in-between world-as-normal and world-goes-crazy in terms of physical infrastructure and industry. On the other hand, I think Tom Davidson’s report somewhat underestimates the rate at which algorithmic / intelligence improvement could occur. Algorithmic improvements which ‘piggyback’ on existing models, and thus start warm and improve cheaply from there, could have quite fast effects, with no need to wait for additional compute to become available or to retrain from scratch. If that sort of thing snowballed, which I think it might, the result could get quite capable quite fast. And it would all be happening in software changes, behind closed doors, so the public wouldn’t necessarily know anything about it.]