It would be trivial for even a Watson-level AI, specialized to the task, to hack into pretty much every existing computer system; almost all software is full of holes and is routinely hacked by bacterium-complexity viruses
Did I suggest otherwise?
Large sections of the economy are already being monopolized by AI (Google is the most obvious example)
Interesting point. As I wrote, I think that an AGI monopolizing larger and larger sections of the economy is a strong possibility.
(Feel free to read things before commenting on them!)
...
“The world’s AI researchers” aren’t remotely close to a single entity working towards a single goal; a human (appropriately trained) is much more like that than Apple, which is much more like that than the US government, which is much more like that than a nebulous cluster of people who sometimes kinda know each other
I agree there are important differences. Why do you feel they’re important for my argument? Quantifying ability to make AI progress with a single number is indeed a coarse approximation, but coarse approximations are all we have.
Human abilities and AI abilities are not “equivalent”; even if their medians are the same, AIs will be much stronger in some areas (e.g. arithmetic, to pick an obvious one); AIs have no particular need for our level of visual modeling or face recognition, but will have other strengths, both obvious and not
That’s not especially important for my argument, because I treat “intelligence” as “the ability to do AI research and program AIs”. (Could I have made that more clear?)
There is already a huge body of literature, formal and informal, on when humans use System 1 vs. System 2 reasoning
Well, if you’re familiar with that literature, feel free to share whatever’s relevant ;)
A huge amount of progress has been made in compilers, in terms of designing languages that implement powerful features in reasonable amounts of computing time; just try taking any modern Python or Ruby or C++ program and porting it to Altair BASIC
Good point.
If you want to know, over the course of thinking about this topic, I changed from leaning towards Yudkowsky’s position to leaning towards Hanson’s. Anyway, if you think you have useful things to say, it might be worth saying them for the sake of bystanders.
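(To make the point about language progress above concrete, here is a small, purely illustrative Python snippet. Every feature it leans on — built-in hash tables, sorting, automatic string handling and memory management — would have to be hand-rolled in Altair BASIC.)

```python
# Word-frequency count in a few lines of modern Python.  Porting this to
# Altair BASIC would mean rebuilding the hash table, the sort, and the
# string handling with fixed-size arrays and line-numbered control flow.
from collections import Counter

text = "the quick brown fox jumps over the lazy dog the fox"
counts = Counter(text.split())          # builds a frequency table in one call
for word, n in counts.most_common(3):   # three most frequent words, descending
    print(f"{word}: {n}")
```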
That’s not especially important for my argument, because I treat “intelligence” as “the ability to do AI research and program AIs”. (Could I have made that more clear?)
I think you’re failing to account for how dramatically a relatively slight difference in intelligence within such a metric is liable to compound itself. A single really intelligent human can come up with insights in seconds that a thousand dimwitted humans can’t come up with in hours. Even within human scales, you can get intelligence differences that mean the difference between problems being insurmountable and trivial. In the grand scheme of things, an average human may have most of the intelligence that a brilliant one does, but that doesn’t mean they’ll be able to do intellectual work at nearly the same rate, or even that they’d ever be able to accomplish what the brilliant one does. To suppose that the work of a self-modifying AI and the human community would compound on a comparable timescale, I think, presupposes that the advancement of the AI would remain within an extremely narrow window.
I think you’re failing to account for how dramatically a relatively slight difference in intelligence within such a metric is liable to compound itself. A single really intelligent human can come up with insights in seconds that a thousand dimwitted humans can’t come up with in hours.
Well, by definition, their intelligence varies wildly according to the metric of making important discoveries. So surely you mean a relatively small difference in human biology. And this fact, while interesting, doesn’t obviously say (to me) that the smart people have some kind of killer algorithm that the less intelligent folks lack… which is the only means by which an AGI could compound its intelligence. It just says that small biological variations create large intelligence variations.
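(An aside for bystanders, since “compounding” is doing a lot of work in this exchange: a toy model with entirely made-up growth rates, showing how a modest per-step advantage widens once gains feed back into capability. Nothing here is a claim about actual AI or human research rates.)

```python
# Toy model with invented numbers: each agent's per-step gain is proportional
# to its current capability, so a small difference in conversion efficiency
# compounds into a growing gap rather than staying within a narrow window.

def trajectory(efficiency, steps=50, capability=1.0):
    """Capability over time when gains feed back into capability itself."""
    history = [capability]
    for _ in range(steps):
        capability += efficiency * capability
        history.append(capability)
    return history

community = trajectory(efficiency=0.05)   # assumed baseline improvement rate
ai = trajectory(efficiency=0.07)          # assumed slightly higher rate

for step in (10, 25, 50):
    print(f"step {step:2d}: capability ratio = {ai[step] / community[step]:.2f}")
```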
Well, there certainly don’t seem to be major hardware differences between smart and not-so-smart humans. But a strong AI wouldn’t need access to many resources before it would be in a position to start acquiring more hardware and computing power.