i.e., to rewrite its own code, and possibly rebuild its own hardware, in order to become smarter and smarter—then its intelligence will grow exponentially, until it becomes smart enough to easily outsmart everyone on the planet.
Recursively, not necessarily exponentially. It may exploit the low-hanging fruit early and improve somewhat more slowly once those are gone. The same conclusion applies: the threat is that it improves rapidly, not that it improves exponentially.
Good point, though if the AI’s intelligence grew linearly or as O(log T) or something, I doubt that it would be able to achieve the kind of speed that we’d need to worry about. But you’re right, the speed is what ultimately matters, not the growth curve as such.
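To make the distinction concrete, here is a toy sketch of my own (the symbols $I_n$, $c$, and $r$ are made up for the illustration and don't come from the original argument): let $I_n$ be the AI's intelligence after $n$ rounds of self-modification, with $I_0 > 0$ and constants $c, r > 0$. Both models below are recursive; only one is exponential.

Diminishing returns (the low-hanging fruit gets picked first, so each round is harder):
$I_{n+1} = I_n + c/I_n \;\Rightarrow\; I_{n+1}^2 \approx I_n^2 + 2c \;\Rightarrow\; I_n \approx \sqrt{I_0^2 + 2cn}$, i.e. roughly $\sqrt{n}$ growth.

Proportional returns (each round improves the system by a fixed fraction $r$):
$I_{n+1} = (1+r)\,I_n \;\Rightarrow\; I_n = I_0\,(1+r)^n$, i.e. exponential growth.

So "recursive" by itself doesn't pin down the growth curve; what matters for the threat model is how steep the curve actually turns out to be.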