I agree there is plenty of room at the top. The question is how we get there.
I avoided the types of scenarios you describe because we don’t understand them; we can’t quantify them and be sure there will be positive feedback loops.
You have to quantify your uncertainty, at least. I consider it highly likely that there are many novel cognitive modules that an intelligence not far beyond human could envision and construct, that would never even occur to us. But not even this is required.
It seems to me implausible that the cognitive framework we humans have is anywhere near optimal, especially given the difficult mind-hacks it takes us to think clearly about basic problems that aren’t hardwired in. Some really hard problems are solved with blinding speed without conscious awareness, while some very simple problems just don’t have a cognitive organ standing ready, and so need to be (very badly) emulated by verbal areas of the brain. (If we did the mental arithmetic for Bayesian updating the way we do visual processing, or if more complicated hypotheses simply felt more unlikely, we’d have had spaceflight in 50,000 BC.) We’re cobbled together one codon change at a time, with old areas of the brain crudely repurposed as culture outstrips genetic change. Thus an AI with a cognitive architecture on our level, but able to reprogram itself, would have ample room to become much, much smarter than us, even without venturing into cognitive realms we can’t yet imagine, simply by integrating modules we can already construct externally, like calculators, into its reasoning and decision processes. Even this, without the further engineering of novel cognitive architecture, looks sufficient for a FOOM relative to human intelligence.
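For concreteness, here is a minimal sketch of the Bayes-update arithmetic that parenthetical refers to, the kind of calculation a calculator-like module would do natively while our verbal machinery labors over it. The hypotheses, priors, and likelihoods are invented purely for illustration; nothing in the exchange specifies them.

```python
# Minimal sketch of a Bayesian update: posterior ∝ prior × likelihood.
# All numbers below are made up for illustration only.

def bayes_update(priors, likelihoods):
    """Return posterior P(H|E) given priors P(H) and likelihoods P(E|H)."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    evidence = sum(joint.values())  # P(E), the normalizing constant
    return {h: p / evidence for h, p in joint.items()}

# Two hypotheses; the more complicated one gets a lower prior
# (the "more complicated hypotheses felt more unlikely" intuition).
priors = {"simple": 0.8, "complicated": 0.2}
likelihoods = {"simple": 0.3, "complicated": 0.9}  # P(observed evidence | H)

print(bayes_update(priors, likelihoods))
# {'simple': 0.571..., 'complicated': 0.428...}
```

The point of the sketch is only that the arithmetic itself is trivial for a dedicated module, in contrast to how effortfully humans approximate it.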