I suspect that part of what is going on is that many in the AI safety community are inexperienced and uncomfortable with politics, and hold highly negative views of government capabilities.
Another potential (and related) issue is that people in the AI safety community think their comparative advantage doesn’t lie in political action (which is likely true) and therefore conclude they are better off pursuing that comparative advantage instead (which is likely false).
The problem is twofold. One, as and to the extent that AI proliferates, you will eventually find someone who is less capable of, and less careful about, sandboxing. Two, relatedly and more importantly, for much the same reason that people will not be satisfied with AIs without agency, they will weaken the sandboxing.
The STEM AI proposal referred to above illustrates this. If you want the AGI to do theoretical math, you don’t need to tell it anything about the world. If you want it to cure cancer, you need to give it a lot of information about physics, chemistry and mammalian biology. And if you want it to win the war or the election, then you have to tell it about human society and how it works. And as it competes with others, whoever has the more complete, real-time data is likely to win.