My initial objection to the Singularity was "how do we know that making something more intelligent isn't an exponentially more difficult task, preventing the feedback loop from growing fast, or even capping it at a low limit?"
The AI Foom debate mostly answered that objection, but I think addressing it in a FAQ about the Singularity would be a good idea.