Have you read the first two chapters of Thomas Schelling’s 1966 Arms and Influence? It’s around 50 pages.
The general gist is that if a lot of powerful Americans in the DoD take something seriously, such as preventing nuclear war, then foreign intelligence agencies will be able to hold that thing hostage in order to squeeze policy concessions out of the US.
It’s a lot more complicated than that, since miscommunication, corruption, compartmentalization, and infighting all muddy the waters of what things are valued by any given military.
This does seem like an important issue to consider, but my guess is it probably shouldn’t be a crux for answering OP’s question (or at least, further explanation is needed for why it might be)? Putting aside concerns about flawed pursuit of a given goal, it would be surprising if the benefits of caring about a goal were outweighed by second-order harms from competitors extracting concessions.
I bet that’s true
But it doesn’t seem sufficient to settle the issue. A world where aligning/slowing AI is a major US priority, one that China sometimes supports in exchange for policy concessions, sounds like a massive improvement over today’s world.
The theory of impact here is that there are a lot of policy actions that could slow down AI, but they’re bottlenecked on legitimacy. The US military could provide that legitimacy.
They might also help alignment directly, if the right person is in charge and has a lot of resources. But even if 100% of their alignment research is noise that doesn’t advance the field, military involvement could still be a huge net positive.
So the real questions are:
Is the theory of impact plausible?
Are there big risks that mean this does more harm than good?
I don’t know about “providing legitimacy”; that’s like spending a trillion dollars to procure one single gold toilet seat. Gold toilet seats are great, thanks to human signalling-based psychology, but they’re not worth a trillion dollars. The military is not built to be easy to steer; being easy to steer would be a massive vulnerability to foreign intelligence agencies.
My model of “steering” the military is a little different from that. It’s over a thousand partially autonomous headquarters, each with its own interests. The right hand usually doesn’t know what the left is doing.
Of the thousand-plus headquarters, there are probably 10 that have the necessary legitimacy and can get the necessary resources. Winning over any one of those 10 would be sufficient to get the results I described above.
In other words, you don’t have to steer the whole ship, just a small part of it. I bet that can be done in 6 months.