I notice I am confused by two assumptions about STEM-capable AGI and its ascent:
Assumption 1: The difficulty of self-improvement for an intelligent system grows either linearly or, if not, less steeply over time than its capabilities do. (Counter-scenario: an AI system achieves human-level intelligence, then soon after reaches 200% of average human intelligence. Once it hits, say, 248% of human intelligence, it runs into an unforeseen roadblock, because reaching 249% by any means turns out to be a Really Hard Problem, orders of magnitude harder than passing the 248% mark.)
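Here is a minimal toy sketch of that counter-scenario in Python. Everything in it is made up for illustration: the cost function, the "research output scales with capability" rule, and the 248% wall are all hypothetical, not claims about how real AI development works.

```python
# Toy model of Assumption 1's counter-scenario. "Capability" is measured
# in % of average human intelligence; all numbers are purely illustrative.
def effort_to_improve(capability):
    # Cost of gaining the next percentage point rises smoothly...
    if capability < 248:
        return capability ** 1.5
    # ...until an unforeseen roadblock makes the next step astronomically harder.
    return 1e12

capability = 100.0  # start at human level
for step in range(200):
    research_output = capability * 20  # research output scales with capability
    if research_output >= effort_to_improve(capability):
        capability += 1  # afford the next increment
    # otherwise: stuck, no matter how many more steps we run

print(f"capability after 200 steps: {capability:.0f}% of human")  # -> 248%
```

In this toy world the system really does self-improve faster and faster for a while, and then stalls just under 249% forever, because the difficulty curve stopped being gentler than the capability curve.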
Assumption 2: An AI's capability to self-improve exceeds its own complexity at all times. This is something of a special case of Assumption 1. (Counter-scenario: the system's complexity is either always, or at some point becomes, greater than its capability, and self-improvement turns into an inescapable catch-22.)
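And a similarly hypothetical sketch of the catch-22 version: suppose every self-modification makes the system somewhat smarter but disproportionately harder to understand. The 10%/25% growth rates below are arbitrary, chosen only to show the crossover.

```python
# Toy model of Assumption 2's counter-scenario (purely illustrative numbers).
capability, complexity = 100.0, 60.0
rewrites = 0
while capability >= complexity:  # it can only rewrite what it can still understand
    capability *= 1.10           # each rewrite: +10% capability...
    complexity *= 1.25           # ...but +25% complexity
    rewrites += 1

print(f"self-improvement stalls after {rewrites} rewrites "
      f"(capability {capability:.0f}, complexity {complexity:.0f})")
```

Whenever complexity compounds faster than capability, the loop terminates after a handful of rewrites: the system becomes too complicated for itself to improve further.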
I guess the hidden Assumption 0 behind both is “every STEM problem is solvable on a realistic timeline if you just throw enough intelligence at it.” To my STEM-ignorant mind, it seems like some problems are either effectively unsolvable (e.g., turning the entire universe into computronium and crunching until the heat death of the universe still won’t crack it), not solvable within a human-meaningful future (turning Jupiter into computronium and crunching for 13 million years is required), or, finally, borderline unsolvable due to a catch-22 (inventing computronium is so complex that you need a bucket of computronium to crunch it).
Can you lead me to understanding why I’m wrong?