You say it like it’s a mathematical theorem or an experimentally tested physical model.
Yes, that’s quite irritating. If it were just me, I would acknowledge my ignorance. But even people like Shane Legg are less certain about the possibility of an intelligence explosion than people associated with SI, which makes me wonder what it is that they know and he doesn’t. Shane Legg writes:
How fast would that then proceed? Could be very fast, could be impossible—there could be non-linear complexity constraints meaning that even theoretically optimal algorithms experience strongly diminishing intelligence returns for additional compute power. We just don’t know.
An intelligence explosion is a possibility. But some people here seem to think it is almost a certainty. That’s just weird.
Peter Corning might be the person to ask about this question. He’s studied synergy in evolution.
The idea of a Tower of optimisation seems relevant here. Daniel Dennett, Alan Winfield and I have worked on this.
I think this illustrates the main perspective needed to make predictions on this topic.
“Tower of optimisation” may just be the coolest-sounding theory I’ve heard this month.
The wording of that sentence confused me enough to make me think about the subject for these past few hours. Good job, I guess. ^^