I think these kinds of considerations should be part of predicting AI. A few posts on related topics, in case you are interested (only the first of which I remember well enough to endorse):
http://meteuphoric.wordpress.com/2010/11/05/light-cone-eating-ai-explosions-are-not-filters/
http://lesswrong.com/lw/g1s/ufai_cannot_be_the_great_filter/
http://lesswrong.com/lw/kvm/the_great_filter_is_early_or_ai_is_hard/