I think you will find this discussed in the Hanson-Yudkowsky foom debate. Robin thinks that distributed networks of intelligence (also known as economies) are indeed a more likely outcome than a single node bootstrapping itself to extreme intelligence. He draws some evidence from the study of firms, which are a real-world example of how economies of scale can produce chunky but networked smart entities. As a bonus, such entities tend to benefit from playing somewhat nicely with each other.
The problem is that, while this is a nice argument, would we want to bet the house on it? A lot of safety engineering is not about preventing the most likely malfunctions but the worst ones. Occasional paper jams in printers are acceptable; fires are not. So even if we think this kind of softer, distributed intelligence explosion is likely (I do), we could be wrong about the possibility of sharp intelligence explosions, and hence it is rational to investigate them and build safeguards.