“Foom” has never seemed plausible to me. I’m admittedly not well-versed in the exact arguments used by proponents of foom, but I have three broad areas of disagreement:
Foom rests on the idea that once any agent can create an agent smarter than itself, this inevitably kicks off a long chain of exponential intelligence improvements. But I don’t see why the optimization landscape of the “design an intelligence” problem should be that smooth. On the contrary, I’d expect lots of local optima: architectures that scale up to a certain level and then sit at a peak with nowhere higher to go. Humans are one example of an intelligence that doesn’t grow without bound.
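To make the “local optima” picture concrete, here is a deliberately toy sketch in Python (my own illustration, not part of any foom argument, with an entirely made-up “capability” function): a greedy self-improvement loop on a rugged landscape stalls at a nearby local peak instead of improving without bound.

```python
import math
import random

def capability(design: float) -> float:
    """A bumpy, bounded stand-in for 'how capable is the agent this design yields'."""
    return math.sin(3 * design) + 0.3 * math.sin(17 * design) - 0.05 * design ** 2

def self_improve(design: float, step: float = 0.01, iters: int = 10_000) -> float:
    """Greedy hill climbing: keep a random tweak only if it improves capability."""
    for _ in range(iters):
        candidate = design + random.uniform(-step, step)
        if capability(candidate) > capability(design):
            design = candidate
    return design

final = self_improve(design=2.0)
print(f"stuck at design={final:.2f}, capability={capability(final):.2f}")
# Typically ends on a nearby local peak, well below the landscape's global maximum.
```

Smarter search strategies can of course escape this particular trap; the point of the sketch is only that “each step finds something better than itself” does not by itself guarantee an unbounded climb.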
Resource constraints are often hand-waved away. We can’t turn the world into computronium at the speed a foom scenario requires. We can’t even keep up with GPU demand for cryptocurrencies. And even granting unbounded computronium, acting at scale in the physical world still requires time, energy, and raw materials.
Intelligence isn’t all-powerful. This cuts in two directions. First, there are strategic settings where a relatively low level of intelligence already lets you play optimally (e.g. tic-tac-toe). Second, there are problems for which no amount of intelligence will help, because the only way to solve them is to throw lots of raw computation at them. Our low intelligence makes it hard for us to identify such problems, but they definitely exist (as any introductory course on computational complexity shows, some problems provably require exponential time no matter how clever the algorithm).
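To illustrate the tic-tac-toe point, here is a small Python sketch (again my own illustration): a plain brute-force minimax search, about as unsophisticated as strategies get, already plays the game perfectly, so any additional intelligence buys nothing.

```python
from functools import lru_cache

# All eight winning lines on a 3x3 board (cells indexed 0..8, row-major).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if one side has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value for `player` to move: +1 win, 0 draw, -1 loss under perfect play."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if ' ' not in board:
        return 0  # draw
    opponent = 'O' if player == 'X' else 'X'
    return max(-value(board[:i] + player + board[i + 1:], opponent)
               for i, cell in enumerate(board) if cell == ' ')

def best_move(board, player):
    """Index of a move that maximizes `player`'s guaranteed outcome."""
    opponent = 'O' if player == 'X' else 'X'
    empty_cells = [i for i, cell in enumerate(board) if cell == ' ']
    return max(empty_cells,
               key=lambda i: -value(board[:i] + player + board[i + 1:], opponent))

if __name__ == "__main__":
    empty = ' ' * 9
    print(value(empty, 'X'))      # 0: with perfect play, tic-tac-toe is a draw
    print(best_move(empty, 'X'))  # 0 here; every opening move already guarantees the draw
```

Against an opponent using this exhaustive search, a superintelligence can do no better than draw; the ceiling is set by the game, not by the players.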