Intelligence explosion follows from physicalism and scientific progress and not much else.
You say it like it’s a mathematical theorem or an experimentally tested physical model. In fact, there are plenty of caveats in the former approach, and there is falsifying evidence in the latter (no recognizable super-intelligence on Earth or in space, billions of years after the Big Bang). Imagine that Moore’s law fizzles in the next few years; where will your intelligence explosion be then? Or maybe there is a law that stupidity grows faster than intelligence, resulting in a world overrun by idiots and killing further progress.
You say it like it’s a mathematical theorem or an experimentally tested physical model.
Yes, that’s quite irritating. If it were just me, I would acknowledge my ignorance. But even people like Shane Legg are less certain about the possibility of an intelligence explosion than people associated with SI, which makes me wonder what it is that they know and he doesn’t. Shane Legg writes:
How fast would that then proceed? Could be very fast, could be impossible; there could be non-linear complexity constraints, meaning that even theoretically optimal algorithms experience strongly diminishing intelligence returns for additional compute power. We just don’t know.
An intelligence explosion is a possibility. But some people here seem to think it is almost a certainty. That’s just weird.
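To make Legg’s diminishing-returns point concrete, here is a toy simulation. It is purely illustrative and not anything Legg or SI have proposed: the update rule and the exponent `alpha` are made-up assumptions chosen only to contrast explosive growth with strongly diminishing returns.

```python
# Toy model of recursive self-improvement, purely illustrative.
# The exponent `alpha` and the update rule are assumptions chosen to
# contrast "explosive" growth with strongly diminishing returns.

def step(level: float, alpha: float) -> float:
    """One round of self-improvement: the gain scales as level**alpha."""
    return level + level ** alpha

def simulate(rounds: int, alpha: float, start: float = 1.0) -> float:
    level = start
    for _ in range(rounds):
        level = step(level, alpha)
    return level

if __name__ == "__main__":
    # alpha > 1: each generation speeds up the next -> faster-than-exponential growth.
    # alpha < 1: diminishing returns -> only roughly polynomial growth, no explosion.
    for alpha in (1.2, 0.5):
        print(f"alpha={alpha}: level after 30 rounds = {simulate(30, alpha):.3g}")
```

With alpha above 1 the level blows up within a few dozen rounds; with alpha below 1 it merely creeps upward. Which regime real self-improvement falls into is exactly the open question in the quote above.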
Peter Corning might be the person to ask about this question. He’s studied synergy in evolution.
The idea of a Tower of optimisation seems relevant here. Daniel Dennett, Alan Winfield, and I have worked on this.
I think this illustrates the main perspective needed to make predictions on this topic.
“Tower of optimisation” may just be the coolest-sounding theory I’ve heard this month.
The wording of that sentence confused me enough to make me think about the subject for these past few hours. Good job, I guess. ^^