But it doesn’t seem sufficient to settle the issue. A world where aligning/slowing AI is a major US priority, which China sometimes supports in exchange for policy concessions, sounds like a massive improvement over today’s world
The theory of impact here is that there are a lot of policy actions that could slow down AI, but they’re bottlenecked on legitimacy. The US military could provide that legitimacy
They might also help alignment, if the right person is in charge and has a lot of resources. But even if 100% of their alignment research is noise that doesn’t advance the field, military involvement could be a huge net positive
So the real question is:
Is the theory of impact plausible?
Are there big risks that mean this does more harm than good?
I don’t know about “providing legitimacy” — that’s like spending a trillion dollars to procure a single gold toilet seat. Gold toilet seats are great, thanks to human signalling-based psychology, but they’re not worth the trillion dollars. The military is not built to be easy to steer; that would be a massive vulnerability to foreign intelligence agencies.
My model of “steering” the military is a little different from that
It’s over a thousand partially autonomous headquarters, each with their own interests. The right hand usually doesn’t know what the left is doing
Of the thousand+ headquarters, there are probably 10 that have the necessary legitimacy and can get the necessary resources. Winning over any one of those 10 would be sufficient to get the results I described above
In other words, you don’t have to steer the whole ship. Just a small part of it. I bet that can be done in 6 months
I bet that’s true