The SIAI folks would say that your reasoning is exactly the kind of reasoning that leads to all of us being converted into computronium one day. More specifically, they would claim that, if you program an AI to improve itself recursively—i.e., to rewrite its own code, and possibly rebuild its own hardware, in order to become smarter and smarter—then its intelligence will grow exponentially, until it becomes smart enough to easily outsmart everyone on the planet. It would go from “monkey” to “quasi-godlike” very quickly, potentially so quickly that you won’t even notice it happening.
FWIW, I personally am not convinced that this scenario is even possible, and I think that SIAI’s worries are way overblown, but that’s just my personal opinion.
Recursively, not necessarily exponentially. It may exploit the low-hanging fruit early and improve somewhat more slowly once that is gone. The same conclusion applies, though: the threat is that it improves rapidly, not that it improves exponentially.
Good point, though if the AI’s intelligence grew linearly or as O(log T) or something, I doubt that it would be able to achieve the kind of speed that we’d need to worry about. But you’re right, the speed is what ultimately matters, not the growth curve as such.
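A toy comparison of what I have in mind (the threshold, starting level, and update rules below are invented purely for illustration, not taken from anyone's actual model):

```python
# Toy model: number of self-improvement steps needed to cross an arbitrary
# "vastly superhuman" threshold under different growth curves. All numbers
# here are made up for illustration.
import math

THRESHOLD = 1000.0   # hypothetical threshold, with human-level defined as 1.0

def steps_to_threshold(update, start=1.0, limit=10**7):
    """Apply `update` repeatedly until the level crosses THRESHOLD (or we give up)."""
    level, steps = start, 0
    while level < THRESHOLD and steps < limit:
        level = update(level, steps)
        steps += 1
    return steps

print("exponential:", steps_to_threshold(lambda level, t: level * 1.1))          # ~73 steps
print("linear:     ", steps_to_threshold(lambda level, t: level + 0.1))          # ~10,000 steps
print("logarithmic:", steps_to_threshold(lambda level, t: 1 + math.log(t + 2)))  # hits the step limit
```

Under the exponential rule the threshold falls in a few dozen steps; under the linear or logarithmic rules it takes thousands of steps or effectively never happens, which is the sense in which the shape of the curve still matters for speed.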
Human-level intelligence is unable to improve itself at the moment (it's not even able to recreate itself, if we exclude reproduction). I don't think monkey-level intelligence will be any more able to do so. I agree that the SIAI scenario is way overblown, at least until we have created an intelligence vastly superior to the human one.
Uh… I think the fact that humans aren't cognitively self-modifying (yet!) has less to do with our intelligence level than with the fact that we were not explicitly designed to be self-modifying, which is what SIAI assumes any AGI would be. I don't really know enough about AI to know whether or not this is strictly necessary for a decent AGI, but I get the impression that most (or all) serious would-be-AGI-builders are aiming for self-modification.
Isn't it implied that sub-human intelligence is not designed to be self-modifying, given that monkeys don't know how to program? What exactly do you mean by "we were not designed explicitly to be self-modifying"?
My understanding was that in your comment you basically said that our current inability to modify ourselves is evidence that an AGI of human-level intelligence would likewise be unable to self-modify.
This is a really stupid question, but I don’t grok the distinction between ‘learning’ and ‘self-modification’ - do you get it?
By my understanding, learning is when a program collects the data it uses by itself, through interaction with some external system. Self-modification, on the other hand, is when the program has direct read/write access to its own source code, so it can modify its own decision-making algorithm directly, not just the data set its algorithm uses.
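A rough sketch of the distinction as I picture it (the Agent class and its methods here are just invented for illustration):

```python
# Illustrative only: "learning" changes the data the policy consults,
# while "self-modification" replaces the decision procedure itself.

class Agent:
    def __init__(self):
        self.weights = {"stay": 0.5, "explore": 0.5}   # data used by the algorithm

    def decide(self):
        # fixed decision-making algorithm: pick the highest-weighted action
        return max(self.weights, key=self.weights.get)

    def learn(self, action, reward):
        # learning: only the data changes; decide() itself stays the same
        self.weights[action] += reward

    def self_modify(self, new_decide):
        # self-modification: the decision procedure itself is replaced
        # (a crude stand-in for rewriting one's own source code)
        self.decide = new_decide

agent = Agent()
agent.learn("explore", 1.0)             # updates the data set through "experience"
agent.self_modify(lambda: "explore")    # replaces the decision-making algorithm itself
```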
This seems to presume a crisp distinction between code and data, yes?
That distinction is not always so crisp. Code fragments can serve as data, for example.
But, sure, it’s reasonable to say a system is learning but not self-modifying if the system does preserve such a crisp distinction and its code hasn’t changed.
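To illustrate the blurring (again a toy example, with all names invented): if the decision rule is itself stored as ordinary data and evaluated at run time, then a routine "data" update rewrites the algorithm just as surely as editing the source would.

```python
# Toy example: the decision rule is stored as a plain string (i.e. as data),
# so updating that "data" is also, in effect, self-modification.

class StringPolicyAgent:
    def __init__(self):
        # code stored as data
        self.policy_source = "lambda weights: max(weights, key=weights.get)"

    def decide(self, weights):
        policy = eval(self.policy_source)   # the stored "data" is executed as code
        return policy(weights)

agent = StringPolicyAgent()
print(agent.decide({"stay": 0.9, "explore": 0.1}))    # -> "stay"

# An ordinary-looking data update that changes the algorithm itself:
agent.policy_source = "lambda weights: 'explore'"
print(agent.decide({"stay": 0.9, "explore": 0.1}))    # -> "explore"
```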