I tend to think that...
… if you operate humans grossly out of distribution by asking them to supervise or control ASI, or even much better than human AGI...
… and if their control is actively meaningful in that they’re not just being manipulated to have the ASI do exactly what it would want to do anyway...
… then even if the ASI is actively trying to help as much as it can under that constraint...
… you’ll be lucky to have 1e1 years before the humans destroy the world, give up that control on purpose, lose it by accident, lock in some kind of permanent (probably dystopian) stasis that prevents the growth you suggest, or somehow render the entire question moot.
I also don’t think that humans are physically capable of doing much better than they do now, no matter how long they have to improve. And I don’t think that anything augmented enough to do substantially better would qualify as human.