Assessment of intelligence agency functionality is difficult yet important
Summary: When observing intelligence agencies, it is hard to see the hardened parts and easy to see the soft, corrupt parts. This creates a bias in which large numbers of people overestimate how prevalent the easily observed soft and harmless parts are. It can even produce a dangerous and widespread belief, including among people whose careers are much further along than yours, that the entire intelligence agency is harmless and irrelevant, when it actually isn’t. Intelligence agencies are probably a mix of less-functional, less-relevant parts and more-functional, more-relevant parts that have a disproportionately large influence over governments and policies. It is a mistake to assume that intelligence agencies are homogeneously composed of non-functional, non-relevant parts that aren’t worth paying any attention to, even if that belief is a popular norm.
Why intelligence agencies are dangerous
There are a wide variety of situations where an intelligence agency suddenly becomes relevant, without warning. For example, most or all of the US natsec establishment might suddenly and unanimously change its stance on gain-of-function (GOF) research, such as if US-China or US-Russia relations once again hit a new 25-year low (which has actually been happening frequently over the last few years).
Either the leadership of an agency, a powerful individual in an agency with the authority to execute operations, or a corrupt clique might personally judge that the best way to expedite or restart GOF research is to target the people who are most efficient or effective at opposing it.
This need not be anywhere near the most effective way to expedite or protect GOF research; it just needs to look that way, convincingly enough for someone to sign off on it, or even for them to merely think that it would look good to their boss.
Competent or technologically advanced capabilities can obviously coexist with incompetent administration and decisionmaking in this mixed-competence model of intelligence agencies. An intelligence agency that is truly harmless, irrelevant, and not worth paying attention to (as opposed to having an incentive to falsely give off that appearance) would have to be both technologically unsophisticated and too corrupt for basic functioning, such as running operations.
This would be an extremely naive belief to have about the intelligence agencies in the US, Russia, and China; particularly the US and China, which have broad prestige, sophisticated technology, and also thriving private sector skill pools to recruit talent from.
When calculating the expected value of policy advocacy tasks that someone, somewhere, absolutely must carry out (like pushing for sensible policymaking on GOF research, which could cause human extinction), many people already recognize that the risk of the relevant community disappearing or dissolving reduces the expected value of everything that community produces; e.g. a 10% chance of the community ceasing to exist or dissolving cuts the expected value of its entire output by something like ~10%.
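To make that arithmetic explicit, here is a minimal sketch (all numbers are hypothetical placeholders, not estimates):

```python
# A minimal sketch of the expected-value haircut described above;
# the numbers are hypothetical placeholders, not estimates.

def discounted_value(base_value: float, p_dissolution: float) -> float:
    """Expected value of a community's output, discounted by the
    probability the community dissolves before delivering it."""
    return base_value * (1 - p_dissolution)

print(discounted_value(100.0, 0.10))  # -> 90.0, i.e. a ~10% haircut
```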
Most people I’ve encountered have in mind a massive totalitarian upheaval, like those of the early-to-mid 20th century, treating such an upheaval as the hard boundary between being secure and not being secure. However, in the 21st century, especially after COVID and the 2008 recession, experts and military planners are more focused on the international balance of power (e.g. the strength of the US, Russia, and China relative to each other and to other independent states) being altered by economic collapse or alliance paralysis rather than by revolutions or military conquest. The world today differs from the world of 70 years ago in many indirect ways.
It makes more sense to anticipate slower and incomplete backsliding, with results like a shift towards a hybrid regime, in which abuses of power by intelligence and internal security agencies become increasingly commonplace due to corruption, and due to a lack of accountability driven by the broad priority placed on hybrid warfare and on preventing foreign adversaries like Russia and China from leveraging domestic elites such as billionaires, government officials, and celebrities/thought leaders who are influential among key demographics (like Yann LeCun).
An example of an angle on this, from the top comment on Don’t Take the Organization Chart Literally:
...a lot of what goes on in government (and corrupt corporate orgs) is done with tacit power. Few DOJ, CIA, and FBI officers have a full picture of just how their work is misaligned with the interests of America. But most all of them have a general understanding that they are to be more loyal to the organization than they are to America.[1] Through his familial and otherwise corrupt connections, [Department of Justice leader] Barr is part of the in-group at the US corrupt apparatus. It can be as simple as most inferior officers knowing he’s with them.
So Barr doesn’t have to explicitly tell the guards to look the other way, he doesn’t have to tell the FBI to run a poor investigation, he doesn’t have to tell the DOJ to continue being corrupt … Lower-level bosses who have the full faith and confidence of their inferiors put small plans into place to get it done. It’s what the boss wants and the boss looks out for them.
Picture Musk’s possible purchase of Twitter. Do you think that if Musk bought Twitter, even as a private owner, he would suddenly have full control of the whole apparatus? Of course not. The people with real power would be his inferiors who have been there for a while and are part of the in-group. The only way for Musk to get a hold of Twitter would be to fire quite a lot of people, many who are integral to the organization.
It’s hard to see the hardened parts
(Note: this is a cleaned up version of a previous post, whose quality I wasn’t satisfied with. Feel free to skip this if you’ve already read it).
Some social structures can evolve that allow secrets to be kept among larger numbers of people. For example, intelligence agencies are not only compartmentalized, but the employees making them up all assume that if someone approaches them offering to buy secrets, it is probably one of the routine counterintelligence operations within the agency that draw out and prosecute untrustworthy employees. As a result, the employees basically one-box their agency and virtually never accept bribes from foreign agents, no matter how ludicrously large the promised payout. And any who fall through the cracks are hard to disentangle from disinformation by double/triple agents posing as easily bribed people.
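A toy decision model, with entirely invented numbers, shows why this equilibrium can be stable even against ludicrous offers:

```python
# A toy decision model (all numbers invented) of why employees who expect
# counterintelligence stings refuse bribes no matter how large the offer.

def expected_value_of_accepting(payout: float) -> float:
    # Assumption: larger offers look more like stings, so the subjective
    # probability that the offer is a sting grows with the payout.
    p_sting = 1 - 1 / (1 + payout / 1e5)
    penalty = 5e6  # prosecution, prison, a ruined career
    return (1 - p_sting) * payout - p_sting * penalty

for payout in (1e4, 1e6, 1e8):
    print(f"offer ${payout:,.0f}: EV = ${expected_value_of_accepting(payout):,.0f}")
# The EV stays negative at every offer size: the upside is capped by the
# suspicion that huge offers generate, while the downside is not.
```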
It’s much more complex than that, but that’s one example of a secret-keeping system evolving inside institutions (emerging almost a hundred years ago or earlier); effective enough not just to keep secrets, but also to thwart or misinform outside agents intelligently trying to rupture secret-keeping networks.
The upper echelons of intelligence agencies are difficult to observe. It is not clear whether the lack of visible output is caused primarily by incompetence and disinterest, or whether the incentive dynamics inside such a powerful structure cause competent individuals to waste their capabilities on internal competition and on eliminating their colleagues. However, it is dangerous to take the average lower- and mid-level government official/bureaucrat, who is easier to access and observe, and extrapolate to the difficult-to-observe higher echelons. The higher echelons might be substantially out-of-distribution. For example, consider a thought experiment with the oversimplified Gervais model of a corporate hierarchy (where the “sociopaths” are highly social and love potlucks, the “clueless” are a reservoir of deep organizational insights, and the “losers” live very happy lives, the main thing they “lose” to being the same aging process as everyone else): an individual progressing up the pyramid would gradually discover a Thanksgiving-turkey effect. Human beings self-sort, so the people at the top of the organization, who already successfully pursued wealth incentives, have unusual and qualitatively different combinations of personal traits than the more easily observed people at the middle and bottom of the pyramid.
Although the libertarian school of thought is the most grounded in empirical observations of government being generally incompetent, this should not distract us from the fundamental problem: the top 20% of an org with 80% of the power is largely unknown territory due to difficulty of observation, and all sorts of strange top-specific dynamics may explain government’s failures. Although models must be grounded in observations, it is still risky to overdepend on the libertarian school of thought, which largely takes low-level bureaucrats, imagines government as uniformly composed of them, and extrapolates those individuals to the highest and most desired positions. Intelligence agencies have surely noticed that posing as an incompetent bureaucrat makes for excellent camouflage, and it’s also well known throughout government that mazes of paperwork deter predators.
The top-performing altruists who make up EA, substantially fewer than 0.1% of all altruists globally, are at the extreme peak due to highly unusual and extreme circumstances, including substantial competence, luck, intelligence, motivation, and the capacity to spontaneously organize in productive ways in order to achieve instrumentally convergent goals. Unlike EA, however, the top 0.1% of people at intelligence, military, and internal security agencies face incredible evolutionary optimization pressure: the threat of regime change, a wide variety of wealthy and powerful elites looking up at them, and continuous strategic infiltration by foreign intelligence agencies. It is not at all clear what sorts of structures end up evolving at the peak of a democracy’s power brokers, and it is not epistemically responsible to automatically defer to the libertarian school of thought here, even if that school is correct about the countless people whose lives were ruined by incompetent government intervention and regulation. Competent people and groups still get sorted to the top, where they face darwinian pressures, even if a large majority of competent people bounce off of bureaucratic nonsense along the way. The operations of intelligence agencies are the observable results of those people being given incredible power, impunity, the ability to monopolize information, and the ability to exploit power and information asymmetries between themselves and the large, technologically advanced private corporations they share a country with (with corporate lobbyists available to facilitate, and even cash-incentivize, a wide variety of complex bargains between agencies and leading firms, notably including revolving-door employment of top talent, which is further facilitated by the power and prestige of intelligence agencies).
It’s easy to see the soft parts
Intelligence agencies are capable of penetrating hardened bureaucracies and other organizations, moving laterally by compromising networks of people, and steering the careers of people in executive departments and legislative branches/parliaments around the world, likely including domestically.
People with relevant experience understand that moving upwards and laterally through a bureaucracy is a science (it is also many other things, most of them extremely unpleasant). Getting promoted and navigating through a bureaucracy is also a much more precise science in the minds of people who have advanced further than you have; given that they were so successful, they have likely done many things right and learned many things along the way that you haven’t.
Likewise, however, it is an even more precise science in the minds of the specialists at intelligence agencies, which have specialized in systematically penetrating, controlling, and deceiving the hardened parts of hardened bureaucracies (and other organizations) all over the world for generations (though only a handful of generations). Human civilization is built on a foundation of specialization and division of labor, and intelligence agencies are the people who specialized in doing that.[1]
This asymmetry of information is made even greater by the necessary dependence on anecdata, and further complicated by the phenomenon whereby many people make decisions based on vibes from their time working at one specific part of an agency.
This is notable because the parts of an agency with high turnover see a disproportionately large number of people enter and exit, and thus occupy a disproportionately large share of observation and testimony. This further contributes to the dynamic where the hardened parts are hard to see and the softer parts easy to see: corruption, incompetence, thuggery/factionalism, and low engagement are each known to increase turnover substantially, whereas units with high-value secrets, relatively competent management, interesting work, and mission-oriented workers are known to have lower turnover and are also better able to recruit top talent from top companies.
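A toy simulation, with invented headcounts and attrition rates, shows how severe this sampling bias can get:

```python
import random

# A toy simulation (headcounts and attrition rates invented) of the
# sampling bias described above: high-turnover parts of an agency come to
# dominate the pool of ex-employee testimony even while employing a
# minority of staff at any given time.

YEARS = 30
departments = {
    # name: (headcount, annual attrition rate)
    "soft/corrupt, high-turnover": (200, 0.30),
    "hardened, low-turnover": (800, 0.03),
}

random.seed(0)
ex_employees = {name: 0 for name in departments}
for name, (headcount, attrition) in departments.items():
    for _ in range(YEARS):
        # each departure adds one potential source of outside testimony
        ex_employees[name] += sum(random.random() < attrition
                                  for _ in range(headcount))

total = sum(ex_employees.values())
for name, n in ex_employees.items():
    print(f"{name}: {n} ex-employees, {n / total:.0%} of the testimony pool")
# -> the department with 20% of the headcount supplies roughly 70% of the
#    available testimony.
```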
Furthermore, there is the risk of anti-inductive situations that comes with the territory of evaluating organizations with a very long history of propaganda, disinformation, and particularly counterintelligence, as well as the use of advanced technology to exploit human psychology (including through data science, mass surveillance, and AI). Going off of vibes, in particular, is a very bad approach, because vibes are emotional, subconscious, and easy to get large amounts of data on and study scientifically. The better you understand something, the easier it is to find ways to get specific outcomes by poking that something with specific stimuli.
Dealing with hypothetical groups of rich and powerful people, who specifically use their wealth and influence to avoid giving away their positions to also-rich-and-powerful foes, requires an understanding of the human cognitive biases around unfalsifiable theories. My model looks great, it’s a fun topic to play around with in your head, and the theory of hard-to-spot islands of competence-monopolization is an entirely different tier from flying spaghetti monsters and invisible dragons; but these considerations must still be evaluated with a quantitative mindset. Ultimately, aside from policy outcomes and publicly known military/intelligence outcomes, there is little good data, and both hypotheses (uniform incompetence vs. non-uniform incompetence within intelligence agencies) must be handled with the best epistemology available. I recommend Yudkowsky’s Belief in Belief, Religion’s Claim to be Non-Disprovable, and An Intuitive Explanation of Bayes’ Theorem (if you haven’t read it already), and also Raemon’s Dark Forest Theories. The constraints I’ve described in this post are critical for understanding intelligence agencies.
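As a gesture at that quantitative mindset, here is a minimal Bayes update over the two hypotheses (the probabilities are illustrative assumptions, not estimates):

```python
# A minimal Bayes update (all probabilities are illustrative assumptions)
# for weighing the two hypotheses named above against the evidence most
# people actually have: observing mostly soft, incompetent-looking parts.

prior_uniform = 0.5        # P(uniform incompetence)
prior_islands = 0.5        # P(hidden islands of competence)

p_obs_given_uniform = 0.9  # uniform incompetence predicts the observation
p_obs_given_islands = 0.8  # but so does the islands hypothesis, since the
                           # hardened parts are hard to observe by design

posterior_uniform = (p_obs_given_uniform * prior_uniform) / (
    p_obs_given_uniform * prior_uniform + p_obs_given_islands * prior_islands
)
print(f"P(uniform incompetence | observations) = {posterior_uniform:.2f}")
# -> 0.53: the easily observed evidence barely discriminates between them.
```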
The study of these institutions warrants much better epistemics than what seems to have taken place so far.
Functioning lie detectors as a turning point in human history
All of human society and its equilibria derive in part from a fundamental trait of the human brain: lies are easier for the human brain to generate than to detect, even during in-person conversations where massive amounts of intensely revealing nonverbal communication are exchanged (e.g. facial expressions, subtle changes in body posture). You cannot ask people whether they are planning to betray you; everything would be different if you could.
If functioning lie detectors were invented, incentive structures as we know them would be completely replaced with new ones that are far more effective. E.g. you could simply require all your subordinates to wear an EEG or go into an fMRI machine, ask each of them who the smartest/most competent person in the office is, promote the people who are actually top performers, and fire any cliques/factions of people whom you detect coordinating around a common lie. Most middle managers with access to functioning lie detection technology would think of these things, and many other strategies that have not yet occurred to me, over the thousands of hours they spend as middle managers with access to functioning lie detection technology.
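As a toy illustration (the names, data structures, and numbers are entirely hypothetical), the middle manager’s procedure reduces to something like:

```python
from collections import Counter

# A minimal sketch (names and data entirely hypothetical) of the middle
# manager's procedure above, assuming every answer has already been
# verified by a functioning lie detector.

def evaluate_office(named_best: dict[str, str],
                    admitted_coordination: dict[str, bool]):
    """named_best: who each employee named as the top performer.
    admitted_coordination: each employee's verified answer to
    'did you coordinate your answers with anyone?'
    Returns (who to promote, who to investigate)."""
    tally = Counter(named_best.values())
    promote, _ = tally.most_common(1)[0]
    investigate = sorted(name for name, coordinated
                         in admitted_coordination.items() if coordinated)
    return promote, investigate

named_best = {"ann": "dev", "bob": "dev", "cat": "dev",
              "dev": "ann", "eve": "fox", "fox": "eve"}
admitted_coordination = {"ann": False, "bob": False, "cat": False,
                         "dev": False, "eve": True, "fox": True}
print(evaluate_office(named_best, admitted_coordination))
# -> ('dev', ['eve', 'fox'])
```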
If your immediate reflexive response is “well, lie detection technology is currently incredibly inaccurate and ineffective”, then that’s a very understandable mistake, but unambiguously a mistake. I’ve talked to many people about this, and almost all of them confidently output basically that exact string of text, yet had no idea where it came from or what was backing it up. That claim was plausibly true 40 or even 20 years ago, but with modern technology it’s much more of a toss-up. The best paper (that I’m willing to share) covering government/military interest in and access to lie detection technology, whether current or via potential future monopolization, is here; among many other things, it also covers the reputation of lie detection technology (one of the easier things to observe and study).
This is likely one of the most significant ways that the next 30 years of human civilization will be out-of-distribution relative to the last 80 years of human civilization (it is NOT #1).
Information I found helpful:
Don’t take the organizational chart literally (highly recommended)
LLMs will be great for censorship
Raemon’s Dark Forest Theories
Joseph Nye’s Soft Power
The US is becoming less stable
[1] Parliaments and legislative bodies, on the other hand, are more about giving a country’s elites legitimate and sustainable access to influence, so that they have an outlet other than playing dirty (and there are a wide variety of ways for a country’s top elites to play dirty at or near the peak of wealth and power; try imagining what a 175-IQ person could get up to). Authoritarian regimes, unlike democracies, focus more on walling elites off. Parliaments are specialists in friendlier things, like robustness and policymaking.
The way I would put it is: there is a wide range of competence and organizational efficacy in any government, and in most of the ways in which you could slice up any government. They include many of the most and least competent people in the world. And governments have access to extraordinary levels of resources and powers. The richest billionaire in the world can scarcely spend annually what an obscure city you’ve never heard of might spend annually on healthcare or pensions.
But governments (and all large organizations, such as corporations) suffer from an extreme deficit of coordination and executive function (pun very intended). Highly-functional organizations cannot replicate themselves, cannot grow without losing functionality, cannot evolve etc. There are permanent productivity differences in businesses doing the exact same thing, because the net selection is too weak—organizations are barely able to maintain their status quo. Oversight and accountability are difficult, there are only a few competent managers or troubleshooters to go around, only a few isolated islands of competence which can be trusted, there’s always a crisis elsewhere as a distraction… A large organization cannot walk and chew bubblegum; leadership like CEOs exist mostly to (1) decide when to stop walking and start chewing bubblegum instead, and (2) yell at everyone that the goal is now ‘chewing bubblegum’ for as long as it takes for the vast lumbering giant to halt. They are much like humans in having a spotlight of conscious attention and a working memory with just a few slots, except that instead of the ‘magic number 7±2’, it’s a little more like ‘the not-so-magic number 0–2’; and then everything else carries on autonomously. Attempts to pay attention to more things just leads to accomplishing nothing, or disaster. (Imagine if you had to consciously control your breathing and heart rate and all the other things your autonomous nervous system does! not to mention the 99% of cognition which is inaccessible to consciousness.)
So, large organizations can often accomplish anything they want, but they cannot accomplish everything. They often fail on the meta-level of wanting to accomplish the right thing, and go off and accomplish the wrong things. Whereas cases of successful projects like the Manhattan Project/Project Nobska/Skunkworks/Apollo Project/Operation Warp Speed are cases where the large organization decides to accomplish the right thing: it makes the project genuinely a top priority, assigning the top people to it, and ensuring it is not held hostage to autonomous routine concerns or nice-to-have things like being environmentally-friendly. (The Eye of Sauron turns its baleful gaze away from the broader war and towards any sources of slowdown, burning through them.) When the Manhattan Project needed tons of highly-conductive metal like copper, they didn’t waste time requesting it or even seizing it from other military needs; they simply borrowed tons of silver from Fort Knox. (More balefully, they didn’t give a toss about ‘pollution’ or ‘safety’, and so the cleanup goes on to this day—costing OOMs more than the Manhattan Project itself ever did!)
When people talk about ‘We need a Manhattan Project’, what they are really saying is not ‘we need to spend $ $ $ on this’ (often $ $ $ has already been spent) but that the large-organization should burn 1 slot of attention on that goal, setting a hard endpoint with Consequences™.
When the Eye of Sauron does turn its gaze onto a specific project, the results are shocking.
In an intelligence community context, the American spy satellites like the KH program achieved astonishing things in photography, physics, and rocketry—things like handling ultra-high-resolution photography in space (with its unique problems, like disposing of hundreds of gallons of water in space) or snatching film-return capsules out of the air as they descended were just the start. (I was skimming a book the other day which included some hilarious anecdotes—like American spies taking tourist photos of themselves in places like Red Square just to assist trigonometry for photo analysis.) American presidents obsessed over the daily spy satellite reports, and this helped ensure that the spy satellite footage was worth obsessing over. (Amateurs fear the CIA, but pros fear NRO.)
Or consider the NSA. Specifically, the Snowden leaks: systematic, well-maintained, engineered, comprehensive, extensive, utterly successfully kept secret hacks of everyone. At this point, probably a lot of LW2 readers weren’t ‘around’ for that, but as humdrum as they may seem now—‘oh sure, of course the NSA was spying on and hacking everyone’—the leaks were massively traumatic to the infosec sector, because people just couldn’t imagine that basically all the medium-case scenarios were true simultaneously—the NSA really had backdoored your cipher, really was piggybacking on other state actors, really was keeping all the hacks successfully secret by extreme levels of OPSEC (which is why hardly anyone had ever seen an NSA hack), really was recording all Internet traffic at some points, really was storing much traffic to crack later with quantum computers, really was intercepting your computer in the mail to add hardware backdoors, etc. They might’ve speculated idly about this or that, and everyone knew about a few things like the Dual_EC backdoor, but there is a world of difference between some fun speculation and, say, being a Google datacenter architect shown an NSA PowerPoint presentation literally making fun of you for not encrypting your fiberoptic lines & letting the NSA harvest all your data. (It’s a bit like the horror people feel as they go from idle speculation in 201x about ‘hey, what if deep learning just kept going and began approaching AGI’ to playing with GPT-4 and realizing ‘oh my god, it’s actually happening’. They thought they lived in one world and not the other, until the illusion shattered.) And this was, of course, in part because this let the NSA provide tons of intelligence that made it directly into the President’s daily briefing, which then gave the NSA budget and powers to get more intelligence, and so on, in a virtuous circle for them.
Meanwhile, of course, in that same American intelligence community at the same time, you have many, shall we say, less impressive things happening. (There is a wide range of competence.)
There are not many slots of attention to spend, and I strongly suspect that little attention has been spared for AI until recently. After all, how could the intelligence agencies take it seriously at the orders of their masters, when their masters still do not*, and even most AI people continue to strenuously downplay its importance? Stealing models of stochastic parrots is not a priority for anyone except ornithophiles. (It’s just a fad, it’s not real intelligence, there have been so many false alarms before, it’ll hit the wall soon, it’ll run out of data, I saw some messed-up hands in an image the other day, it’s a distraction from real issues like hardware, and aren’t OA user numbers falling anyway...?)
It’s become fashionable to claim that the NSA and/or CCP has surely already long ago hacked OpenAI (or Anthropic, or DeepMind—the claim is salted according to one’s taste) and stolen this or that (GPT-3, or GPT-4). I doubt these claims, as they are made without any evidence and have explicit ulterior motives for hyping up threats & justifying acceleration (and we’d see more impressive things out of Chinese DL if the CCP had really done so). I’ve been hearing for many years now the notion that if the publicly-known DL is X% then there must be a secret government AGI project which is way better and does X+Y%—needless to say, at no point have we ever learned of such a thing having been true. (This also holds for various topics in genetics and a few other disruptive technologies; if one had a nickel for every time some commenter claimed that there was a government Manhattan Project for which there was zero evidence because it was secret, one could almost fund one’s own Manhattan Project.) If there have been serious hacking campaigns or serious ‘AGI Manhattan Projects’, they have probably started relatively recently and may be completely unsuccessful. But most of all, I doubt the idea that the NSA would’ve blasted through OA security, in the way it definitely could if it took off the gloves, because there is no sign that the Eye of Sauron has truly turned to AI or that any intelligence agencies have begun truly prioritizing AI-related industrial espionage, as opposed to continuing on the usual autonomous autopilot of various low-priority efforts and plucking low-hanging fruit. (Note that even the China chip ban was justified in considerable part by hypersonic missile & advanced aerospace military R&D, which are topics the Eye has been staring at.)
However, just because they have not yet doesn’t mean they won’t at some point. And when they do, the situation will alter radically—and outsiders may have no idea.
* See anything from Xi Jinping lately that indicates he gives a s—t about AI rather than doubling down on his reign’s classic-Marxist-style monomania of supply-side heavy industry & construction?
What is that book with the fun anecdotes?
Trevor, one pattern I’m noticing is that you have a habit of identifying the limits of technology (here and elsewhere) and then, because you can’t prove otherwise, asserting that intelligence agencies possess these capabilities and deploy them effectively toward national security problems, without evidence. It’s akin to arguing from first principles that humans would have nuclear fusion by 1955 because we’ve had a theory of quantum mechanics for some time now.
In reality, it seems unlikely to me that the government’s ability to analyze massive data for trends and manipulate large groups of people via the internet runs ahead of digital advertising (and in fact, it is common knowledge at this point that government uses the advertising industry for large portions of its analysis tasks), because digital advertising is already attempting to solve similar problems in similar ways, and has access to better human capital and more money than any intelligence agency does. The CIA has unique capabilities, because they’re allowed to break the law in ways Google cannot, but at the same time they face problems, because they’re also incapable of operating in foreign countries overtly at all.
These are some pretty big and broad claims; they’re the core of the argument in this comment, and they seem likely to be subject to the problem I’ve described in this post. It would also be a very harmful mindset if mistaken. Can you go into more detail, either here or in a DM? This would have some really big implications for my research, and if true it would save me a lot of time reinventing the wheel.
For those curious about this topic, here are some resources I’d recommend (I am not a professional, and this is not professional advice):
Samo Burja’s core concept writings, especially Great Founder Theory
The Mechanics of Tradecraft sequence.
Any actual well-sourced/well-researched written history of an actual intelligence agency.
Zvi’s posts on simulacra levels
The passage here about Julian Assange.
The compressed view-from-10,000-feet TLDR: Organizations are made of parts, those parts are people and things, game theory’s gonna game theory, dramatic (and/or dramatic-sounding) stuff can result.