I feel as though the case for control of the early transformative AGI you mentioned still passes, just via a different line of reasoning. As you noted earlier, there's an issue: should labs solve the ASI alignment barriers using ET AGI, the resulting solution is likely to work on the surface while having serious flaws we may not be able to detect. Applying alignment to the ET AGI itself, as a safeguard specifically against those solutions that would leave humanity vulnerable, may be a route worth pursuing that still follows control principles. Obviously your point about focusing on actually solving ASI alignment rather than on control still stands, but the line of thinking I described may allow both ideas to work in tandem. I'm not hyper-knowledgeable, so please correct me if I'm misunderstanding anything.