P1 should be that existential risk is already high: moving from, say, 70 per cent to 80 per cent is not a large increase. Nuclear risk was already very serious during the previous Cold War.
The main problem with your argument is that it is not clear why an intelligent agent would use nukes against humanity, since doing so would terminate the agent too. To address this, I suggest adding "creating independent robotic infrastructure" as a necessary condition. I have a post about this.
Also, it follows from your argument that the merging of AI and the US government is the most natural path to AI dominance. I think this is true, but it is not generally accepted.
Possible counterarguments:
It doesn't increase the risk, since agents with nuclear arsenals already exist.
The current US government rose by exploiting unique resources (land, etc.) that are no longer available to a newcomer.
The current US government will oppose the emergence of any new organization similar to itself.