You believe that there is a strong evolutionary pressure to create powerful networks of individuals that are very good at protecting their interests and surviving in competition with other similar networks
You believe that these networks utilize information warfare to such an extent that they have adapted by cutting themselves off from most information channels, and are extremely skeptical of what anyone else believes
You believe that this policy is a better adaptation to this environment than what anyone else could come up with
These networks have adapted by being so extremely secretive that it’s virtually impossible to know anything about them
You happen to know that these networks have certain (self-perceived) interests related to AI
You happen to believe that these networks are dangerous forces and it makes sense to be scared
This image that you have of these networks leads to anxiety
Anxiety leads to you choosing and promoting a strategy of self-deterrence
Self-deterrence leads to these networks having their (self-perceived) interests protected at no cost to themselves
Given the above premises (which, for the record, I don’t share), you have to conclude that there’s a reasonable chance that your own theory is an active information battleground.
My model actually holds that information warfare has mostly become an issue recently (within the last 10-20 years) and that these institutions evolved before that. Information warfare is mainly worth considering because:
1) It is highly relevant to AI governance: no matter what your model of government elites looks like, the modern information-warfare environment strongly indicates that they will (at least initially) see the concept of a machine god as some sort of 21st-century-style ploy.
2) Although there are serious falsifiability problems that limit the expected value of researching potential high-competence decision-making and institutional structure within intelligence agencies, I'm arguing that the expected value is not very low, because the evidence for incompetence is also weak (albeit less weak), and because evidence of incompetence all the way up is itself an active information battleground (e.g. the news articles about Trump and the nuclear chain of command during the election dispute and January 6th).