That’s not especially important for my argument, because I treat “intelligence” as “the ability to do AI research and program AIs”. (Could I have made that more clear?)
I think you’re failing to account for how dramatically a relatively slight difference in intelligence on such a metric is liable to compound itself. A single really intelligent human can come up with insights in seconds that a thousand dimwitted humans can’t come up with in hours. Even within the human range, you can get intelligence differences that mean the difference between a problem being insurmountable and trivial. In the grand scheme of things, an average human may have most of the intelligence that a brilliant one does, but that doesn’t mean they’ll be able to do intellectual work at nearly the same rate, or even that they’d ever be able to accomplish what the brilliant one does. To suppose that the work of a self-modifying AI and that of the human community would compound on a comparable timescale presupposes, I think, that the AI’s advancement would remain within an extremely narrow window.
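To put a number on “compound itself,” here’s a minimal toy sketch (my own illustration, with made-up rates, not anything anyone in this thread has proposed): two research efforts each grow their capability by a fixed fraction per cycle of work, and the only difference between them is a two-percentage-point edge in that growth rate.

```python
# Toy compounding model (hypothetical rates chosen for illustration only):
# capability grows by a fixed fraction each research cycle, so a small
# edge in the per-cycle rate compounds into an enormous lead over time.

def capability(rate_per_cycle: float, cycles: int) -> float:
    """Capability after compounding growth: (1 + rate) ** cycles."""
    return (1.0 + rate_per_cycle) ** cycles

for cycles in (10, 100, 500):
    human = capability(0.08, cycles)  # assumed human-community rate
    ai = capability(0.10, cycles)     # assumed slightly faster AI rate
    print(f"after {cycles:3d} cycles, the AI is {ai / human:8.1f}x ahead")
```

After ten cycles the gap is barely visible (about 1.2x); after a few hundred it’s a gulf (thousands of times). The specific numbers mean nothing; the shape of the divergence is the point.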
I think you’re failing to account for how dramatically a relatively slight difference in intelligence on such a metric is liable to compound itself. A single really intelligent human can come up with insights in seconds that a thousand dimwitted humans can’t come up with in hours.
Well, by definition, their intelligence varies wildly according to the metric of making important discoveries. So surely you mean a relatively small difference in human biology. And this fact, while interesting, doesn’t obviously say (to me) that smart people have some kind of killer algorithm that less intelligent folks lack… which would be the only means by which an AGI could compound its intelligence. It just says that small biological variations create large intelligence variations.
Well, there certainly don’t seem to be major hardware differences between smart and not-so-smart humans. But it wouldn’t take giving a strong AI access to very many resources before it would be in a position to start acquiring more hardware and computing power.