The sorts of intelligences you are talking about are narrow AIs, not general intelligences. If you told a general intelligence to produce paperclips but it didn’t know what a paperclip was, then its first subgoal would be to find out. The sort of mind that would give up on a minor obstacle like that wouldn’t foom, but it wouldn’t be much of an AGI either.
And yes, most researchers today are working on narrow AIs, not on AGI. That means they’re less likely to successfully make a general intelligence, but that has no bearing on the question of what will happen if they do make one.