Not many more fundamental innovations needed for AGI.
Can you say more about this? Does the DeepMind AGI safety team have ideas about what’s blocking AGI that could be addressed by not many more fundamental innovations?
If we did have such ideas, we would not be likely to write about them publicly.
(That being said, I roughly believe “if you keep scaling things with some engineering work to make sure everything still works, the models will keep getting better, and this would eventually get you to transformative AI if you can keep the scaling going”.)