Assuming you had a respectable metric for it, I wouldn’t expect general intelligence to improve exponentially forever, but that doesn’t mean it can’t follow something like a logistic curve, with us near the bottom and some omega point at the top. That’s still a singularity from where we’re sitting. If something is a million times smarter than I am, I’m not sure it matters to me that it’s not a billion times smarter.
Sure! I argue that we just don’t know whether such a thing as “much more intelligent than humans” can exist. Millions of years of primate evolution have pushed human IQ into the 50-200 range. Perhaps that can go 1000x, or perhaps it would level off at 210. The AGI concept assumes it can reach a very large number, which might be wrong.
We don’t know, true. But given the possible space of limiting parameters it seems unlikely that humans are anywhere near the limits. We’re evolved systems, evolved under conditions in which intelligence was far from the most important priority.
And of course under the usual evolutionary constraints (suboptimal lock-ins like the backward-wired photoreceptors in the retina, the limited range of biological materials—nothing like transistors or macro-scale wheels, etc.).
And by all reports John von Neumann was barely within the “human” range, yet seemed pretty stable. He came remarkably close to taking over the world, despite there being only one of him and his not putting any real effort into it.
I think all you’re saying is there’s a small chance it’s not possible.
How did von Neumann come close to taking over the world? Perhaps Hitler, but von Neumann?
Even without having a higher IQ than a peak human, an AGI that merely ran 1000x faster would be transformative.