I agree that one could have scenarios in which there are AI programs with humanlike capacities that are not yet capable of such development (e.g. a super-bloated system running on massive server farms). However, they tend to involve AI development happening very surprisingly quickly, and they don’t seem stable for long: bloated implementations can be made more efficient, with strong positive feedback driving the improvement, and superhuman hardware will arrive soon after powerful AI, if not before.
I’m not sure how to interpret what you’re saying. You say:
they tend to involve AI development happening very surprisingly quickly
which sounds to me like a summary of long experience. But you also seem to be talking about a scenario which you cannot possibly have experienced even once. So, I’m not sure what you’re saying.
I’m saying that, in my experience of people working out consistent scenarios that combine AI development with sustained scarcity, the scenarios offered usually involve human-level AI being developed early, before hardware can advance much further.