For something to be a theorem, it has to be derived from a sound set of axioms. So, I guess, the first question to ask is “What set of axioms would be useful for constructing a model of an emergent strong AI?” I suspect some of the deep minds think about issues like that for a living, though I don’t personally know of anything formal along those lines. It would be a very interesting development, though. Maybe it would let us formalize some of the ideas you are listing, as well as many others. There is certainly some work being published by MIRI and others, but at this point I don’t think it is at the level of rigor required for anything like a no-go theorem, which is what you are asking for.