Overall, I thought the case in this chapter for faster over slower takeoffs was weak.
The main considerations pointing to low recalcitrance for AI seem to be the possibility of hardware and content overhangs—though these both seem likely to have been used up already, and are anyway one-off events—and the prospect of hardware growth in general continuing to be fast, which seems reasonable but doesn’t distinguish AI from any other software project.
So the argument for fast growth has to be based on optimization power increasing a huge amount, as I think Bostrom intends. The argument for that is that first people will become interested in the project, causing it to grow large enough to drive most of its own improvement, and then it will recursively self-improve to superintelligence.
I agree that optimization power applied to the problem will increase if people become very interested in it. However, saying that optimization power will increase is very different from saying that the project will grow large enough to rival the rest of the world in producing useful inputs for it, which is an ambitious claim. Optimization power applied to other projects increases all the time, and this doesn't happen.
It would have been good to have a quantitative sense of the scale on which optimization power is anticipated to grow, and of where the crossover is thought to be. One of these two has to be fairly unusual, it seems, for a project to take over the world, yet the arguments here are too qualitative to infer that anything particularly unusual would happen, or to distinguish AI from other technologies.
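To make concrete the kind of quantitative framing I have in mind, here is a rough rendering of the chapter's informal growth model. The notation, in particular the split of optimization power into project, world, and system contributions, is mine; this is a sketch of the shape of the argument, not a model Bostrom writes out formally.

```latex
% Rough rendering of the chapter's rate-of-growth relation (my notation).
% I(t): system capability; R(I): recalcitrance; O(t): optimization power
% applied to improving the system, split into contributions from the
% project, the rest of the world, and the system itself.
\[
  \frac{dI}{dt} = \frac{O(t)}{R(I)},
  \qquad
  O(t) = O_{\mathrm{project}}(t) + O_{\mathrm{world}}(t) + O_{\mathrm{system}}(I)
\]
% "Crossover" is then, roughly, the point at which the system's own
% contribution to its improvement starts to dominate:
\[
  O_{\mathrm{system}}(I) > O_{\mathrm{project}}(t) + O_{\mathrm{world}}(t)
\]
```

Put this way, my complaint is that the chapter gives little indication of how large these terms could plausibly become, how low recalcitrance would have to stay, or how soon the crossover condition might be met.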
I don't mean to claim that the conclusion is false (that would be a longer discussion), just that the case made here seems insufficient.
We might have a takeoff that does not involve a transition to what we'd call general intelligence. All the AI needs is an ability to optimize very well in one area that is critical to human civilization (warfare, financial trading, etc.) and the ability to outwit humans who try to stop it.
There are ways that the AI could prevent humans from stopping it without the full smartness, trickiness, and cleverness that we imagine when we talk about general intelligence.
Although I want to avoid arguing about stories here, I should give an example. Imagine a stock-trading AI that has set up its investments so that stopping it would bring financial disaster to a wide variety of powerful financial entities, while letting it continue would benefit them, at least up to a certain point, and this effect is designed to grow larger as the AI takes over the world economy. Or, more simply, a military AI that sets up a super-bomb on a tripwire to protect itself: nothing that needs 1000x-human intelligence, just a variation on the tripwire systems and nuclear bombs that exist today. Both of these could be part of the system's predefined goals, not necessarily a convergent goal that the system discovers, as described by Omohundro. Again, these are just meant to trigger the imagination about a super-powerful AI without general intelligence.