I think I understand the implication you’re invisibly asserting, and will try to outline it:
If Strong AI is impossible, then there is an intelligence maximum somewhere along the scale of possible intelligence levels, low enough that any AI which appeared to us to be Strong would violate it.
There is no reason a priori for this limit to be above human normal but close to it.
Therefore, the proposition “either the intelligence maximum is far above human levels or it is below human levels” has probability ~1. (Treating lack of maximum as ‘farthest above’.)
Of those two branches, only "below human levels" is compatible with Strong AI being impossible. Therefore, if Strong AI were impossible, we wouldn't be possible either.
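A toy Monte Carlo sketch of steps 2–4, in case the probability claim is the sticking point (the prior range and the width of the "near-human" band below are purely illustrative assumptions, not claims about the actual scale of possible intelligence):

```python
import random

# Toy model: where the intelligence maximum sits, in orders of
# magnitude relative to human level (0 = human). The 20-order-of-
# magnitude prior range is an arbitrary illustrative assumption.
SAMPLES = 1_000_000
in_narrow_band = 0  # maximum above human level, but only barely

for _ in range(SAMPLES):
    maximum = random.uniform(-10.0, 10.0)
    if 0.0 < maximum < 0.1:  # "above human normal but close to it"
        in_narrow_band += 1

# ~0.005: under this prior, the awkward middle case is rare, so
# "far above human levels or below human levels" gets probability ~1.
print(in_narrow_band / SAMPLES)
```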
This is true in the abstract, but doesn't deal with a) the possibility of restricted simulation (taking Vinge's Zones of Thought as a model) or b) the anthropic arguments mentioned elsewhere. There could be nonrandom reasons for the placement of an arbitrary intelligence maximum.
Then you wouldn’t exist. Next question?
I presume this is downvoted due to some inferential gap… How does one get from "no AGI" to "no humans"? Or, conversely, why does the existence of humans imply that AGI is possible?
I hope they all downvoted it because the OP asked about a story idea without claiming it was plausible in our world.
I downvoted mainly because Eliezer is being rude. Dude didn’t even link http://lesswrong.com/lw/ql/my_childhood_role_model/ or anything.