5 Reasons Why Governments/​Militaries Already Want AI for Information Warfare

1. Militaries are perhaps the oldest institutions on earth to research and exploit human psychology.

2. Information warfare has long hinged on SOTA psychological knowledge and persuasive skill.

3. Information warfare is about winning over high-intelligence elites, not the ignorant masses.

4. Digital information warfare capabilities were already a prominent feature of the US-China conflict by the late 2010s.

5. SOTA psychological research and manipulation capabilities have already started increasing by an order of magnitude every ~4 years.

1. Militaries are perhaps the oldest institutions on earth to research and exploit human psychology.

Throughout history, militaries have tended to persist in or exit the civilizational gene pool based on their ability to succeed at recruitment, morale, and adversarial strategizing.

However, unprecedented features of the 20th century drove militaries to prioritize psychological and information warfare more than ever before, including the prevalence of the false-flag attack, plausible deniability, and the invention of game theory for the purpose of nuclear brinkmanship.

Joseph Nye (Soft Power, 2004, p. 19) argues that these changes made hard military power revolve around successes and failures in information warfare:

...modern communications technology fomented the rise and spread of nationalism, which made it more difficult for empires to rule over socially awakened populations. In the 19th century, Britain ruled over a quarter of the globe with only a fraction of the world’s population. As nationalism grew, colonial rule became too expensive and the British empire collapsed...

In addition to nuclear and communications technology, social changes inside the large democracies also raised the costs of using military power. Postindustrial democracies are focused on welfare rather than glory, and they dislike high casualties… the absence of a prevailing warrior ethic in modern democracies means that the use of force requires an elaborate moral justification to ensure popular support, unless actual survival is at stake. For advanced democracies, war remains possible, but it is much less acceptable than it was a century or even half a century ago. The most powerful states have lost much of their lust to conquer.

The focus on terrorist recruitment and on revolutions in Eastern European and Middle Eastern states (e.g., the Arab Spring) further indicates that modern militaries consider psychological and information warfare a top priority.

2. Information warfare has long hinged on SOTA psychological knowledge and persuasive skill.

According to Nye (Soft Power, 2004, p. 106), changing trends in propaganda required greater psychological sophistication from governments/militaries in order to yield the same results as a few decades earlier:

publics have become more wary and sensitized about propaganda. Among editors and opinion leaders, credibility is the crucial resource… Reputation becomes even more important than in the past, and political struggles occur over the creation and destruction of credibility. Governments compete for credibility not only with other governments, but with a broad range of alternatives including news media, corporations, nongovernmental organizations, intergovernmental organizations, and networks of scientific communities.

Politics has become a contest of competitive credibility. The world of traditional power politics is typically about whose military or economy wins… Governments compete with each other and with other organizations to enhance their own credibility and weaken that of opponents. Witness the struggle between Serbia and NATO to frame the interpretation of events in Kosovo in 1999 and Serbia a year later.

3. Information warfare is about winning over high-intelligence elites, not the ignorant masses.

Information warfare has long been about cultivating human conduits to build a critical mass sufficient to reach key elites and turn them against the government/military, as happened with scientists during the Soviet-backed Vietnam antiwar movement.

According to Audra Wolfe in Competing with the Soviets (2013, pp. 115-119), the Vietnam antiwar movement introduced psychological factors that decimated the Pentagon’s access to scientists:

Collectively, the student protests, radical critiques, and congressional reforms dismantled the [pro-military research] consensus that had ruled university campuses since the end of World War II. No longer would it be acceptable for universities to fuel their expansion with the help of military funds. With the more sweeping radical criticisms offered by organizations like Science for the People, even those scientists who accepted nonmilitary funds increasingly began to ask what sort of ideological strings came attached to federal largesse. And perhaps the biggest change of all was that for the first time since the atomic scientists movement, scientists felt empowered to offer political criticisms of the relationship of science to national security without repercussions to their careers. Yet, as the scientists would soon find out, defense analysts were no longer so very interested in what they had to say...

Given that protests against the Vietnam War originated on university campuses, campus opposition to the scientific and technical research that undergirded American foreign policy is not surprising. What is perhaps more startling is the speed with which such questioning spread to defense insiders. The expansion of US military involvement in Vietnam challenged even those scientists who had previously supported defense work or had served as advisors to the defense establishment. And as their criticisms grew, the distrust between scientists and those they advised became mutual. For the first time in a generation, national security advisors began to ask whether it might be better to make decisions about science and technology without the input of scientists or engineers.

One of the more telling sites for this split was within the Jasons, a secret collective of physicists who spent a portion of their summers providing advice to the Pentagon. Originally created in 1959 under the umbrella of ARPA, Jason scientists offered independent assessments of military technologies and suggested new technologies that military planners might want to pursue. Unlike most defense advisory groups, Jason chose its own members, and the membership itself decided which problems to investigate after receiving top secret briefings from military leaders. By 1965, Jason’s projects included missile defense, submarine detection and tracking, and schemes to disrupt the Earth’s magnetic field. Although Jason usually offered solutions to intractable problems, it occasionally used its powers to nix scientifically implausible ideas, such as plans to shoot down incoming missiles with powerful lasers...

The Jasons learned the hard way that scientists do not necessarily control the things they have created. When summaries of several of the Jason reports appeared in the New York Times’ publication of the Pentagon Papers in 1971, the Jasons became a symbol of all that had gone wrong in the relationship between academic scientists and the military. A 1972 booklet published by [an] antiwar group… recounted the most damning of the Jason’s projects and identified several Jasons by name. Protests followed, including a three-day siege of Columbia University’s Pupin Laboratories. Several Jasons resigned, while others simply noted that their function all along had been to advise the military. In the face of personal threats, those Jasons who remained committed to advising the military reiterated their right to advise their own government. They returned to work with a renewed dedication to secrecy, not so much chastened as disillusioned with the tactics of dissent.

Another group, the President’s Science Advisory Committee (PSAC), was even driven into an attritional public conflict that further weakened both the Johnson and Nixon administrations’ access to scientists (pp. 117-119).

4. Digital information warfare capabilities were already a prominent feature of the US-China conflict by the late 2010s.

I don’t like engaging in China-bashing, since the regime is much more defense-oriented than most Westerners think, and is mainly just stuck in a bad low-trust equilibrium with American intelligence agencies. Their use of AI for authoritarian control is probably just copying and retooling capabilities originally invented in the US.

However, Chinese attempts to expand global influence are also relevant here (their side of the story is that they are counteracting similar US-backed systems, but authoritarian states always benefit from claims like that, even when not true).

Robert Sutter (US-China Relations: Perilous Past, Uncertain Present, 4th edition, 2022, p. 216) covers the current state of China’s efforts to build global information warfare capabilities:

Since Chinese party control of key Chinese industries and economic enterprises grew under Xi Jinping [since 2012], and China’s national intelligence law was judged to require Chinese companies to cooperate with Chinese government requests for information, the expansion of China’s digital communications equipment and infrastructure [throughout Asia, Europe, and the US] meant that Chinese digital infrastructure deployed abroad could be used by Chinese authorities for purposes of intelligence, influence operations, and other means advantageous to the state.

American concern over China’s 5G development was at the heart of the Trump Administration’s restrictions targeting Huawei, China’s leading company developing and deploying 5G and related technology and infrastructure abroad. The US government worked with considerable success to persuade intelligence officials among US allies of the dangers to security posed by the communications equipment provided by Huawei or other Chinese companies.

It’s important to note that chip firmware backdoors and stockpiles of OS exploits are also important risk factors for foreign-produced electronic devices and infrastructure, although I currently only have information about NSA stockpiles of OS exploits.

(Sutter, p. 290)

PRC methods of social and political control evolved to include the widespread use of sophisticated surveillance and big-data technologies. Increasingly, Chinese companies were exporting data and surveillance technologies around the world. In April 2019, the Australian Strategic Policy Institute, an Australian-based nonpartisan think tank, showed Chinese firms involved in installing 5G networks in 34 countries and deploying so-called safe cities surveillance technologies in 46 countries. In October 2018, Freedom House reported 38 countries in which Chinese companies had installed internet and mobile networking equipment, 18 countries that had deployed intelligent monitoring systems [sic] and facial recognition developed by Chinese companies, and 36 countries in which media elites and government officials had travelled to China for trainings on new media or information management.

(Sutter, p. 295)

The agreements enabled profitable Chinese infrastructure development and deepened Chinese influence while serving the power and personal wants of the authoritarian and/​or corrupt foreign leaders. This symbiosis of Chinese-foreign government interests represented a strong asset in China’s growing international influence as the world was full of such regimes.

Added to this bond was Chinese provision of communications and surveillance systems that assisted the foreign leaders to track and suppress opponents. Related was robust Chinese interchange with media outlets in various states. Those outlets pursued news coverage that was positive concerning the government leadership and China… Communications and surveillance systems also assisted Chinese intelligence collection and manipulation of opinion in the country.

The array of foreign governments influenced in these ways included Venezuela and Ecuador in Latin America; Serbia, Montenegro, and at times arguably Italy and Greece in Europe; Djibouti and Zambia in Africa; the Maldives and Sri Lanka in South Asia; and Cambodia, Laos, Malaysia, Myanmar, and the Philippines in Southeast Asia. Many authoritarian governments in the Middle East and Central Asia were seen as inclined to work closely with China along these lines.

5. SOTA psychological research and manipulation capabilities have already started increasing by an order of magnitude every ~4 years.

This causes major militaries to shift resources to offensive and defensive information warfare.

This already started more than a decade ago. By now, the big 5 tech companies (Amazon, Apple, Google, Facebook, Microsoft) should each have more than enough human behavior data, and the capability to process and interpret it (e.g., via AI and psychology researchers), to unilaterally pioneer the field of human persuasion research, particularly in the domain of impression/vibe manipulation.

Via the social media scrolling data collection paradigm, they have access to billions of detailed case studies about the precise circumstances under which a human formed an impression about a topic.

Unfortunately, scrolling past a piece of information on a social media news feed with a mouse wheel or touch screen generates at least one curve, and billions of people output trillions of those curves each day. Each curve is just a vector of numbers: the perfect shape to plug into ML.

[Image: Daily Mail headline: thumbs “travel two marathons a year” scrolling through social media]
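As a minimal sketch of what that looks like in practice (the telemetry format, function name, and numbers here are all hypothetical, not any platform’s actual pipeline), a raw scroll trace can be resampled into a fixed-length vector that slots directly into standard ML tooling:

```python
import numpy as np

def scroll_curve_to_vector(timestamps, scroll_positions, n_points=32):
    """Resample one raw scroll trace into a fixed-length position + velocity
    vector that standard ML models can ingest. (Hypothetical telemetry format.)"""
    t = np.asarray(timestamps, dtype=float)
    y = np.asarray(scroll_positions, dtype=float)
    # Put every trace on the same uniform time grid so all vectors share a shape.
    grid = np.linspace(t[0], t[-1], n_points)
    position = np.interp(grid, t, y)
    # Scroll velocity is often the informative part: pausing over a post shows up
    # as a near-zero segment, i.e., dwell time.
    velocity = np.gradient(position, grid)
    return np.concatenate([position, velocity])  # shape: (2 * n_points,)

# A made-up trace: the user pauses mid-feed (flat region) before scrolling on.
ts = [0.0, 0.2, 0.4, 1.9, 2.1, 2.3]
pos = [0, 120, 240, 250, 400, 560]
print(scroll_curve_to_vector(ts, pos).shape)  # (64,)
```

Dwell time and scroll velocity around each post are exactly the signals that reveal which items caught a user’s attention.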

The social media news feed is well-suited to controlling for variables and analyzing the effectiveness of various combinations and orderings of posts on people with various traits.
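A minimal sketch of the underlying experimental design, assuming a simple deterministic variant-assignment scheme (the names and constants are illustrative, not any platform’s real API):

```python
import random

def assign_feed_order(user_id, posts, n_variants=4, experiment_seed=7):
    """Deterministically bucket a user into one of several fixed post orderings,
    the basic unit of a controlled feed experiment. (Illustrative sketch only;
    real systems would also stratify assignment by user traits.)"""
    variant = (user_id * 2654435761 + experiment_seed) % n_variants  # cheap hash
    rng = random.Random(variant * 1_000_003 + experiment_seed)
    ordered = list(posts)
    rng.shuffle(ordered)  # same variant -> same ordering, every time
    return variant, ordered

variant, feed = assign_feed_order(user_id=12345, posts=["a", "b", "c", "d"])
print(variant, feed)
```

Because assignment is deterministic per user and the post set is held constant, downstream differences in impressions between variants can be attributed to the ordering itself.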

They probably have substantial capabilities to steer people’s thinking and behavior in measurable directions, because that capability is easy to notice and easy to build: if the direction is measurable, then just optimize for whatever tended to cause people with similar traits to move in that direction. That is the kind of research capability that predictive analytics facilitates. AI is not even required, although it dramatically increases the capabilities.
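A toy version of that optimization loop, sketched on synthetic data with scikit-learn’s LogisticRegression (the feature layout and labels are assumptions for illustration): train a model predicting whether a user with given traits moved in the measured direction after seeing given content, then serve whichever candidate post maximizes that prediction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in data: each row concatenates 4 user-trait features with
# 4 content features; the label records whether the user then moved in the
# measurable direction (e.g., expressed the targeted opinion).
X = rng.normal(size=(5000, 8))
y = (X[:, 1] * X[:, 5] + rng.normal(scale=0.5, size=5000) > 0).astype(int)
model = LogisticRegression(max_iter=1000).fit(X, y)

def best_post_for(user_traits, candidate_posts):
    """Score each candidate post for this user and return the index of the one
    predicted most likely to move them in the measured direction."""
    rows = np.array([np.concatenate([user_traits, post]) for post in candidate_posts])
    return int(np.argmax(model.predict_proba(rows)[:, 1]))

user = rng.normal(size=4)              # hypothetical trait vector
candidates = rng.normal(size=(20, 4))  # hypothetical content feature vectors
print(best_post_for(user, candidates))
```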

Most members of the AI safety community are highly distinct from the vast majority of the people in the data set, but we are still vulnerable to exploits like clown attacks that work on virtually any human, and the social media paradigm allows hackers to continuously try things with low success rates until they find things that work (a multi-armed bandit approach).
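For concreteness, here is a minimal Beta-Bernoulli Thompson sampling bandit, the textbook form of “keep trying low-success-rate things until something works” (the success rates below are made up):

```python
import random

class ThompsonBandit:
    """Beta-Bernoulli Thompson sampling over a set of candidate tactics."""
    def __init__(self, n_arms):
        self.wins = [1] * n_arms   # Beta(1, 1) uniform priors
        self.fails = [1] * n_arms

    def choose(self):
        # Sample a plausible success rate for each arm; play the best sample.
        samples = [random.betavariate(w, f)
                   for w, f in zip(self.wins, self.fails)]
        return samples.index(max(samples))

    def update(self, arm, worked):
        if worked:
            self.wins[arm] += 1
        else:
            self.fails[arm] += 1

random.seed(0)
true_rates = [0.002, 0.005, 0.02]  # made-up per-exposure success rates
bandit = ThompsonBandit(len(true_rates))
for _ in range(10_000):
    arm = bandit.choose()
    bandit.update(arm, random.random() < true_rates[arm])
print(bandit.wins)  # arm 2 typically ends up with the bulk of the successes
```

Even when every tactic works on well under 5% of exposures, the bandit steadily concentrates traffic on the most effective one.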

People getting their minds hacked is obviously something that could happen during slow takeoff and before AGI. In fact, it is so obvious that it is even plausible that the task of human manipulation could be automated and scaled by systems existing today, which offer orders of magnitude more powerful large-scale human behavior analysis, research, and continuous experimentation than the one-shot n = 100-1000 experiments that dominated the 20th century paradigm of academic psychology research.

The 20th century was radically altered by the discovery of psychology, a science of the human mind, and by its exploitation. The human brain, and controlling/steering it, is a science; and if the first generation of empiricism (20th-century academic psychology) largely failed to acquire spectacular capabilities, that doesn’t mean the next generations will fail too, especially after reaching the point where empirical research capabilities start increasing by an order of magnitude every ~4 years (which already started more than a decade ago). This tech is bottlenecked on data, algorithmic progress, and human talent pools (for training models, and for labeling correlations and psychological outcomes).

The ability to try things until something works on a target, combined with the ability to quantify what kinds of things tended to work on people with specific combinations of traits, combined with billions of case studies, results in the natural generation of powerful manipulation engines. Human organizations are incentivized to pioneer these capabilities by default (upon discovering and demonstrating them), since controlling other people (including getting people out of the way) is instrumental to almost any goal.
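A minimal sketch of the “what tended to work on people with these traits” lookup that such an engine reduces to (the segments and tactic names are purely illustrative):

```python
from collections import defaultdict

class SegmentedTactics:
    """Per trait-segment tally of which tactic empirically worked, i.e.,
    'what tended to work on people with this combination of traits'.
    (Illustrative sketch; segments and tactic names are made up.)"""
    def __init__(self, tactics):
        self.tactics = tactics
        self.stats = defaultdict(lambda: [0, 0])  # (segment, tactic) -> [wins, tries]

    def record(self, segment, tactic, worked):
        entry = self.stats[(segment, tactic)]
        entry[0] += int(worked)
        entry[1] += 1

    def best_for(self, segment, min_tries=50):
        """Highest empirical success rate in this segment, ignoring tactics
        that haven't accumulated enough case studies yet."""
        rates = {}
        for tactic in self.tactics:
            wins, tries = self.stats[(segment, tactic)]
            if tries >= min_tries:
                rates[tactic] = wins / tries
        return max(rates, key=rates.get) if rates else None

engine = SegmentedTactics(["scarcity_frame", "in_group_cue", "ridicule_pairing"])
engine.record(segment=("18-25", "tech"), tactic="ridicule_pairing", worked=True)
print(engine.best_for(("18-25", "tech")))  # None until enough tries accumulate
```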

If they have noticed ways to steer people toward buying specific products, to instill a wide variety of compulsions against quitting the platform, and to prevent/counteract other platforms’ multi-armed bandit algorithms from automatically exploiting strategies (e.g., combinations of posts) to plunder their users, then you can naturally assume that they’ve noticed their capability to steer people in a wide variety of other directions as well. The problem is that major governments and militaries are overwhelmingly incentivized and well-positioned to exploit those capabilities for offensive and defensive information warfare.

Intelligence agencies like the CIA depict themselves as mere information-gathering institutions loyal to the president, much as the CDC depicts itself as a responsible authority. In reality, the empirical data from the Cold War and the War on Terror make it very clear that intelligence agencies are actually Bureaucracies that Conquer: from infiltrating and overthrowing unfriendly regimes, to targeting, infiltrating, and intimidating domestic elites into submission.

They are bottlenecked by competence, which is difficult to measure due to a lack of transparency at higher levels, but revolving-door employment easily allows them to source flexible talent from the big 5 tech companies; this practice is itself endangered by information warfare, further driving interest in information warfare superiority.

Access to tech company talent pools also determines the capability of intelligence agencies to use the OS exploits and chip firmware exploits needed to access the sensors in the devices of almost any American, not just the majority of Americans who leave various sensor permissions on. This allows even greater access to the psychological research needed to compromise critical elites such as the AI safety community.

However, many SOTA psychological influence techniques, such as clown attacks, work on humans-in-general, regardless of how distinct the target is from the average person in the data.

Major banks use AI to research human behavior data, particularly the spending/saving ratio that heavily influences whether recessions are mitigated or exacerbated. This research is obviously dual-use by default, even if transaction data is a far weaker window into the human mind than social media scrolling data (Amazon and Chinese e-commerce firms have many degrees of freedom to research and experiment with both).

These capabilities were first sought as a macroeconomic tool by the Reagan and Thatcher administrations in the 1980s, but only now are conditions ripe for recessions to be mitigated (or weaponized) by governments and large firms utilizing SOTA human influence capabilities. In the US-China context, it’s well known that the most probable win condition is economic collapse or weakness on the opposing side.

These reasons are why it’s reasonable to assume that, by default, AI gets covertly used for large-scale human manipulation research. It can’t be done without millions of subjects, because AI needs large, diverse data pools, and those millions of subjects won’t spend an hour a day in the optimized research environment (social media news feeds) if they are aware of the risks.

Furthermore, if you look at a list of NGOs infiltrated by the CIA, companies with FBI informants, or communities that received special attention from the NSA, it should be obvious in hindsight that the AI safety community is extremely similar to the type of geopolitically significant community that tended to get targeted in the past.

As a result, the AI safety community must reduce its massive attack surface and become hardened against SOTA AI-powered manipulation. Surviving the 2020s does not have much intrinsic value, but it is instrumentally convergent to persist and continue alignment research.

It’s folly to take the happy-go-lucky world we experienced throughout most of our lives and imagine the 2020s as more of the same; humans are gaining capabilities, and the expectation of a happy-go-lucky 2020s already hasn’t held up.
