I think the intuition you have for recursive self-improvement is that of a machine running essentially the same program at each stage, just faster. That’s not what’s meant. Human brains aren’t sped-up chimp brains; they have some innate cognitive modules that allow them to learn, communicate, model cause and effect, and change the world in ways chimps simply can’t. We don’t know what cognitive modules are even possible, let alone useful; but it seems clear there’s ‘plenty of room at the top’, and that a recursively self-improving intelligence could program and then use such modules. Not to go Vernor Vinge on you, but if a few patched-together modules made the difference between chimp and human technology, then a few deliberately engineered ones could leap to what we consider absurd levels of control over the physical world.
I agree there is plenty of room at the top. The question is how we get there.
I avoided the type of scenarios you describe because we don't understand them. We can't quantify them or be sure there will be positive feedback loops.
You have to quantify your uncertainty, at least. I consider it highly likely that there are many novel cognitive modules that an intelligence not far beyond the human level could envision and construct, modules that would never even occur to us. But not even this is required.
It seems to me implausible that the cognitive framework we humans have is anywhere near optimal, especially given the difficult mind-hacks it takes us to think clearly about basic problems that aren't hardwired in. Some really hard problems are solved at blinding speed without conscious awareness, while some very simple problems have no cognitive organ standing ready, and so must be (very badly) emulated by the verbal areas of the brain. (If we did the mental arithmetic for Bayesian updating the way we do visual processing, or if more complicated hypotheses simply felt more unlikely, we'd have had spaceflight in 50,000 BC.) We're cobbled together one codon change at a time, with old areas of the brain crudely repurposed as culture outstrips genetic change. Thus an AI with a cognitive architecture on our level, but able to reprogram itself, would have ample room to become much, much smarter than us, even without entering cognitive realms we can't yet imagine: simply by integrating modules we can already construct externally, like calculators, into its reasoning and decision processes. Even that, without any further engineering of novel cognitive architecture, looks sufficient for a FOOM relative to human intelligence.
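To be concrete about what a calculator-style module buys you, here is the kind of Bayesian update I mean, written out as a minimal sketch; the numbers are invented for illustration, but the arithmetic is exactly what such a module does instantly and an unaided brain does badly:

```python
# A minimal sketch of one Bayesian update (illustrative numbers only):
# how likely is a hypothesis after seeing a piece of evidence?
prior = 0.01             # P(H): the hypothesis starts out rare
p_e_given_h = 0.95       # P(E | H): evidence is very likely if H is true
p_e_given_not_h = 0.05   # P(E | not H): evidence sometimes appears anyway

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(f"posterior = {posterior:.3f}")  # about 0.161
```

The answer (about 0.16) is far lower than most people's gut response, which is the point: three lines of trivial arithmetic, effortless for an integrated module, are something the verbal areas of the brain emulate slowly and unreliably.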