INTERLUDE: This point, by the way, is where people’s intuition usually begins rebelling, either because our brains are excessively confident in themselves, or because we’ve seen too many stories in which some indefinable “human” characteristic is still somehow superior to the cold, unfeeling, uncreative Machine. In other words, we don’t understand that our intuition and creativity are actually cheap hacks to work around our relatively low processing power. Dumb brute force is already “smarter” than human beings in any narrow domain (see Deep Blue, evolutionary algorithms for antenna design, Emily Howell, etc.), and a human-level AGI can reasonably be assumed capable of programming up narrow-domain brute forcers for any given narrow domain.
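To make “dumb brute force” concrete, here is a minimal sketch of blind variation plus selection, in the spirit of the evolutionary-search examples above (the target string, mutation rate, and offspring count are illustrative assumptions for this sketch, not anything from the cited examples). The searcher has no insight into the problem at all; it just mutates candidates and keeps whatever scores best, and it still converges far faster than naive enumeration would.

```python
import random
import string

# Illustrative parameters -- assumptions for this sketch, not anything canonical.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "
MUTATION_RATE = 0.05   # chance that each character mutates in an offspring
OFFSPRING = 100        # offspring generated per generation

def fitness(candidate: str) -> int:
    """Score a candidate by how many characters already match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent: str) -> str:
    """Blindly perturb characters; the mutator knows nothing about WHY a character is right."""
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in parent
    )

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while parent != TARGET:
    generation += 1
    # Selection: keep the best of the parent and its mutated offspring.
    parent = max([parent] + [mutate(parent) for _ in range(OFFSPRING)], key=fitness)

print(f"Reached the target in {generation} generations")
```

Exhaustively enumerating all 27^28 candidate strings is hopeless, but this dumb hill climber typically finishes in a few hundred generations; that gap between enumeration and even the crudest search heuristic is the kind of narrow-domain power the interlude is pointing at.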
No, the reason that people disagree at this point is that it’s not obvious that future rounds of recursive self-improvement will be as effective as the first, or even that the first round will be that effective.
Obviously an AI would have large amounts of computational power, and would probably be able to think much more quickly than a human. Most likely it would be more intelligent than any human on the planet by a considerable margin. But this doesn’t imply that it could keep improving itself at the same rate, or that each round of self-improvement would be as effective as the last.