Hacker-AI and Cyberwar 2.0+
I have been hacked, and some (selected) files have been deliberately deleted – this file/post was among them. I found suspicious software on my computer and made screenshots; these screenshots were deleted together with several files from my book “2028 - Hacker-AI and Cyberwar 2.0”. I was able to recover most of them from my backup.
So far, I have hesitated to post sensational warnings about the use of smartphones in malware-based cyberwars designed to decapitate governments. Still, I wrote this paper out of concern that this could happen – and cheap marketing talk does not and will not change that.
I don’t know who hacked me, but I assume they obtained this post. It is spicy (and scary). It is published here to protect my family and myself. I have beefed up my security; certainly, these measures are no match for what I believe attackers could do. Therefore, I have established measures to guarantee that my research will survive and be published even before it officially appears as a book.
I have changed my password credentials, but I am under no illusion that this is enough to protect this publication. I have asked friends to save and print this post on their local systems (and not to sign into this site while doing so). I will certainly repost it if an impersonator deletes (or modifies) it with my credentials. (LessWrong doesn’t have 2FA – and I am not requesting it, because I assume it would be insufficient anyway.)
Cyberwar is currently fought in two main forms: (1) propaganda, disinformation, and the weaponization of social media, and (2) disruption of services like electricity, communication, and Internet (information) services. Other contemplated forms of cyberwar aim at what wars usually do: create indiscriminate destruction and casualties among innocents, i.e., increase the cost of war. In recent decades, (kinetic) weapons have become more surgical/limited in the damage they inflict. Overall, waging war is very expensive – in human lives, economically/socially, and environmentally. War also has costly, unpredictable side effects, which deters nations from using military means to pursue political goals. This fundamental calculus will change with Hacker-AI and Cyberwar 2.0+. Hacker-AI can make cyberwar a low-risk, low-cost decision. Cyberwar can become a compelling choice due to significant first-strike advantages and fabricated misdirections/accusations against innocent parties.
1. Hacker-AI
Hacker-AI is a hypothesized application of AI to low-level computer-system hacking that significantly speeds up the development of malware. It changes (cyber-)war’s cost and outcome by using, rather than destroying, the enemy’s resources. Hacking was (and for many still is) a time- and labor-intensive activity with uncertain outcomes. So far, it has been too unpredictable, insufficiently targeted, or insufficiently controllable for comprehensive military use at scale.
Several posts on LessWrong discuss Hacker-AI in more detail. “Hacker-AI and Digital Ghosts – Pre-AGI” (https://www.lesswrong.com/posts/TPNSuNLwss8jre5mr/hacker-ai-and-digital-ghosts-pre-agi) describes the underlying problems leading to Hacker-AI and what features it can or will have. “Improved Security to Prevent Hacker-AI and Digital Ghosts” (https://www.lesswrong.com/posts/SkQEzoJaupFdapsz9/improved-security-to-prevent-hacker-ai-and-digital-ghosts) discusses proposed technical solutions to eliminate the problems with Hacker-AI. The post “Hacker-AI – Does it already exist?” (https://www.lesswrong.com/posts/huSvZgwzvfNr49QqD/hacker-ai-does-it-already-exist) addresses how Hacker-AI could be (or may have been) created and whether it already exists.
However, hacking will become much faster and more effective with the rise of AI methods. Its use can be simple, scalable, pervasive, and surgical. Because complexity is the enemy of security and a single vulnerability is enough to undermine all security measures, making software invulnerable to malware is likely impossible. We assume/hypothesize that attackers (currently) have a significant, sustainable advantage over defenders.
Hacker-AI is a foundational capability for accelerating the detection of software or device vulnerabilities. Once Hacker-AI gives hackers elevated (sysadmin) access rights without proper authentication, low-level reverse code engineering (RCE) can be applied to steal crypto keys or utilize device features covertly. With Hacker-AI doing RCE, every app can be turned into something it was not before: a spy, saboteur, or traitor.
Furthermore, Hacker-AI can make malware detection much more difficult by leaving fewer (or false) data traces for misdirection, or by actively evading detection by modifying lower-level Operating System (OS) features. Hiding apps near-perfectly will require OS modifications. These changes are feasible – even as covert updates. Therefore, Hacker-AI can turn apps into digital ghosts and prevent them from being removed from occupied systems. The first Hacker-AI to deploy this irremovability feature broadly and covertly could become the only one.
Hacker-AI capabilities are utilized by additional software designed to wage Cyberwar 2.0, in which humans would still activate/control many attack details. In Cyberwar 3.0, it is hypothesized that a high degree of automation helps attackers make detailed, goal-oriented decisions via reward functions, closed feedback loops, or other AI methods. Cyberwar 3.0 is like an automated computer game played by a handful of people controlling operational goals. In the aftermath of a cyberwar, laws and IT capabilities could be combined to force compliance on besieged countries (via AI-enabled mass surveillance) while suppressing resistance or dissent in the population.
2. Hacker-AI in Cyberwar 2.0+
Hacker-AI can be used in cybercrime, but (in this context) that is an application used for misdirection. Here, we present an underlying framework of features/capabilities that facilitates Cyberwar 2.0+ activities:
(1) Surveillance, i.e., collecting reliable, comprehensive intelligence on all citizens/organizations without the attacked country detecting it. This data gathering (audio/text/location) could create an accurate, comprehensive, cross-referenced model of the roles, responsibilities, and motivations of everyone relevant in the attacked society. Malware could even look for resumes and individual pressure points to be used for intimidation/coercion or for later enforcing societal compliance. Surveillance turns the public power supply, the internet, and many untouched freedoms (speech/protest) to the attacker’s advantage. Malware-/smartphone-based surveillance (like Pegasus from the NSO Group) could make public surveillance infrastructure (like CCTV) redundant – infrastructure that is itself still vulnerable to Hacker-AI.
(2) Selective access denial (denial of services for specific people) – while keeping communication, internet, and power supply fully available for surveillance. Access denial need not be perceived as a malicious, targeted action or a demonstration of power. Instead, it could come more innocently via unreliable services, temporary outages, or failures, all caused by hidden software/malware features on devices (unbeknown to users).
(3) Directly intimidating people by bypassing physical/logistical barriers to delivering personalized threats. A threatening chat with an AI bot is likely more effective than one with a real person. Threats delivered by talking AI (bots) will give victims the impression that they have little chance of remaining undetected if the threats are ignored. Tracking victims and using their compliance data makes it more difficult for citizens to resist automated recruitment as spies or collaborators. Sooner or later, people are scared into collaboration and acceptance.
(4) Real-time deep fakes, redefining truth – for audio/video calls, news, or past events. Digital data can be modified in multiple places to support a narrative that is not based on reality or facts. The language of laws, rules, and regulations could also be modified (when digital) to serve an assailant’s agenda, provided the original wording is not in the working memory of the people who apply them regularly. Likewise, hiring, firing, promotions, or demotions could be executed with automated (faked) messages. It will be very difficult to determine what is true and what is a lie.
(5) Reduction of the costly consequences of a typical war. The goal is to prevent costly damages, including sabotage and possible penalties from economic sanctions. In Cyberwar 2.0+, no physical damage/destruction is required; not even ransomware or cyber vandalism is necessary. With less destruction and fewer avoidable costs, neither disruption within the occupied country nor unmitigated sanctions devalue the spoils of war.
(6) Misdirection about whose Hacker-AI is responsible for waging a cyberwar. There is always a short list of suspects, but proving who is responsible when the stakes are high is (already) extremely difficult, possibly impossible. Malware could be used to point the blame at an innocent third party. Dealing with Hacker-AI, we should assume that data traces are left purposefully and intentionally, designed to misdirect digital forensics. “Cui bono” (who benefits) could narrow the list of state actors, but that is not proof. Cyber-criminals could always be blamed until time runs out and resistance or retaliation becomes futile.
3a. Hacker-AI Adapted to Cyberwar Requirements
Hacker-AI will have centralized and decentralized features. Hacks/exploits executed on client-side systems are probably developed in centralized computing facilities, with data (i.e., apps) covertly uploaded from distributed client instances. Hacker-AI code on occupied devices varies; it doesn’t need sophisticated AI features, only smart, modifiable low-level code that hides additional code executions. This deployed malware could consist of a small core that can be used universally. It could call tools undetected by the main OS, including tools belonging to the OS itself.
The foundational client-side malware likely operates outside/below existing (OS) permissions. It would have three main tasks: (i) hide its activities from the main OS, (ii) hide/change its code/configuration against advanced detection/forensics, and (iii) receive/request covert instructions from the outside on what to do. A fourth feature might be connecting with occupied neighbors (sharing data) or (exploring and) spreading to unoccupied devices.
The first two features amount to never getting caught – or, if one instance is caught and analyzed, i.e., revealed and reverse-engineered, all relevant (or endangered) instances are changed so they stay undetected/unassailable on all other (non-probed) systems. Detectable patterns or the reuse of (cloned) code make the detection of digital ghosts easier; creating diversity would limit the damage from successful detections.
Still, detectability is also a tool in cyberwar, used for misdirection and plausible deniability. If Hacker-AI detects detection tools, it could feed them misinformation accusing other malware/parties of being the culprit.
The third feature allows the attacker to operate undercover and to do almost anything on occupied devices. Furthermore, if unmodified firewalls prevent undetected data exchange with remote systems, Hacker-AI could find other systems (like nearby phones) to facilitate unsupervised contact with the outside. Transfer of non-OS code among client-side malware instances could also be done very efficiently with decentralized peer-to-peer features. The assumption that users must do something wrong (like clicking a link) to be infected with malware is false. Finding new vulnerabilities that can be exploited click-free is exactly what Hacker-AI would be used for.
Additionally, using older phones (without smartphone features), like burner phones, may reduce the problem of comprehensive audio surveillance, but not the problem of selective malware-triggered service denial or intimidation. Even 20-year-old mobile phones had operating systems that could be modified to host malware.
3b. Cyberwar Measures as a Consequence of Hacker-AI
The most distinct difference from existing cyberwar scenarios is that Cyberwar 2.0+ is an interactive data operation that actively uses collected information against its adversary and avoids costly destruction. Malware allows defenders’ capabilities to be disabled far less obviously.
Contrary to mainstream assumptions, we assume that the availability of power supply, internet, and communication is more advantageous for attackers than for defenders, and that any destruction is counterproductive for the attack and its aftermath. However, the goals of cyberwars are defined solely politically or economically; if eliminating an economic competitor is the goal, destruction might still be the tool of choice.
A reliable, targeted denial of access could effectively decapitate governments and society. Deep fakes can fill the gap and contribute to confusion. Dominating the information space can also be used to build up a new executive. The required Cyberwar 2.0+ tools must have a comprehensive feature set or experienced operators. Only fully committed state actors with governmental resources could use their institutional experience with laws, bureaucracies, and the security apparatus to define sufficiently concrete operational goals. With surveillance, Cyberwar 2.0+ could quickly become a legal/police operation to fortify gains, establish self-censoring behavior among citizens, and avoid destruction from sabotage or terrorism.
As soon as people know or assume that surveillance is happening and that Hacker-AI is helping authorities collect data from their private discussions, most citizens will fall in line via (social) compliance. The knowledge that the government is capable of mass surveillance, and that phones or internet-capable devices are used to spy on people’s locations, meetings, private conversations, or social-media activities, will change user behavior significantly. This societal change can be amplified with social scoring that punishes non-compliance via targeted inconveniences/disadvantages for detected dissent. Self-censoring behavior will quickly silence most voices of resistance or mild forms of civil disobedience.
As a basic goal, Cyberwar 2.0+ must isolate and disarm the opposing military or security (police) forces loyal to the decapitated government or society. Once most IT devices are occupied by the assailant’s Hacker-AI, there is no hiding from covert or overt surveillance; arrests can happen anywhere, anytime. Security forces could know within seconds where any citizen resides, and it would likely take only hours or days to identify and eliminate the resistance. Dictatorships have shown (e.g., via actions against the Uyghurs) how potential acts of sabotage are proactively suppressed with reeducation camps.
In Cyberwar 2.0+ situations, adversaries communicating with citizens (not necessarily via phone calls) could intimidate people and disrupt trust in the previous government. A silenced or decapitated executive, prevented from giving orders, is effectively replaced by the attacker, particularly if the attacker can issue (faked) orders. The speed of reshaping the bureaucracy and the leadership of the security apparatus would likely depend on the quality of surveillance and preparation. Fear, hysteria, and misinformation (confusion, accusations, or fabricated evidence) among the population, combined with a climate of anticipated invasion, could help the attacker have innocent bureaucrats, politicians, and security leaders arrested by the existing security forces, as planned.
Additionally, small numbers of traitors (e.g., human informants) could control large systems/organizations via fear and self-enforced compliance with demands coming from AI bots. These bots don’t leave evidence or witnesses; they need to show their presence only when they intimidate. For everyone else, life would continue as if nothing had changed. Even journalists are either intimidated or deceived into spreading misdirection or disinformation.
There is little doubt that Hacker-AI can be applied in many more ways than mentioned in these sections.
3c. Deniability and Misdirection
Cyberwar 2.0+ could be undeclared. Suspicious events can be blamed on inherent instabilities in democracies or on cybercrime. All Hacker-AI activities are fully deniable because data traces are likely intentional misdirections. Even the existence of Hacker-AI will likely be disputed.
With existing tools, defenders are disadvantaged because cyberwar activities are initially covert. At the same time, attackers can follow very complex, detailed plans, broken down (and updatable in real time) for operational teams, with (immediate) feedback on the success/failure of each action step. Hacker-AI-supported events might be recognized as Hacker-AI interventions only in hindsight, or information about them requires whistleblowers.
The deception of defenders, i.e., misdirection, is part of the plan. Digital forensics checks data traces, and reverse engineering finds patterns in tool use or other giveaways; these findings are used to attribute attack software to a certain group of hackers. Reused code snippets, particular combinations of compiler tools, and remnants of the programming language or character set used could unintentionally reveal information about the attacker, as sketched below.
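To make the attribution idea concrete, here is a toy Python sketch that scores how much raw content two malware samples share, using the overlap of their byte n-grams; reused code fragments push the score up. This is a minimal illustration, not a real forensic toolchain – the n-gram size and file names are assumptions for the example, and real attribution uses far richer signals.

```python
from pathlib import Path

def byte_ngrams(data: bytes, n: int = 8) -> set[bytes]:
    """Collect all length-n byte sequences; reused code yields shared n-grams."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def jaccard_similarity(a: set[bytes], b: set[bytes]) -> float:
    """Jaccard index: |intersection| / |union|, in [0, 1]."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def compare_samples(path_a: Path, path_b: Path, n: int = 8) -> float:
    """Score how much raw code/content two samples share."""
    grams_a = byte_ngrams(path_a.read_bytes(), n)
    grams_b = byte_ngrams(path_b.read_bytes(), n)
    return jaccard_similarity(grams_a, grams_b)

if __name__ == "__main__":
    # Hypothetical sample files; a score near 1.0 suggests heavy code reuse.
    score = compare_samples(Path("sample_a.bin"), Path("sample_b.bin"))
    print(f"shared-code similarity: {score:.3f}")
```

A Hacker-AI that deliberately randomizes or transplants such fragments would defeat exactly this kind of naive scoring – which is the point of the next paragraph.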
Hacker-AI could reanalyze its developed tools/exploits and create variations that point to different culprits. Besides blaming another country, intentionally created data traces could point to rogue hacker groups or cyber-criminals, deflecting responsibility. Attacking military supply logistics or sabotaging weapon release or control systems could be considered an act of war – so pointing to others is necessary and potentially effective.
If Hacker-AI activities are detected on servers, there is a chance that these tools could be neutralized by honeypots and turned into double agents to misinform attackers. But honeypots on too many systems are impractical; they lose their value as tools or sources on which adversaries depend for their decision-making.
The final transition from covert cyberwar operations to an overt full takeover is under the full control of an attacker. However, Hacker-AI and cyberwar tools will likely be kept secret even after a full victory.
3d. Simulation of Cyberwar Activities
The fog of war makes warfare and every military action a risky endeavor; many factors remain unconsidered. Cyberwar actions could be made interactive on a short feedback timescale; other steps are uncontrollable because an air gap prevents direct interaction. However, simulations could test and study details and outcomes, and show how capabilities perform in different circumstances. Simulations minimize risk and costs, while operational tools can be optimized and enhanced with automated follow-up actions.
A Cyberwar 2.0 could follow a concrete, detailed plan, potentially adapted or optimized automatically. The provided tools ensure that (most) cyberwar actions are controlled by humans. The software tracks success and adjusts to failures (automatically). Instead of destroying capabilities permanently, temporary, remotely controllable sabotage is prepared and tested ahead of its activation.
Most importantly, realistic simulations are based on actual/real-time info on geography, population details, asset distribution/deployment, possible adversarial threats, goals, etc.
Additionally, the impact of sanctions on the economy could be a topic of simulation to detect vulnerabilities proactively. Simulations can be used to determine how preventative measures (i.e., taken ahead of an attack) could reduce the sanctions’ impact on the assailant. Hacker-AI can then be used against key suppliers to gain the know-how needed to prevent sanction-triggered disruptions of business continuity. This proactive approach to sanctions (targeted spying) is another net-positive contribution of Hacker-AI and Cyberwar 2.0+ to the assailant’s and the targeted country’s economies.
4. Cost of War: Conventional vs. Cyberwar
Military planners focus on destruction rather than the covert utilization of adversarial resources; some even argue that resources should be destroyed before they fall into the hands of the enemy. Destroying capabilities gives defenders no time to prevent or mitigate damage. Besides the loss of life, destruction creates sustained disruption and high costs from loss, replacement, or extended repair time. Conventional wars are extremely expensive. Still, the attacker or prospective winner is not interested in dealing with damages after winning, and the losing side, suffering from (permanent/physical) damage, is often reinforced in its resolve to continue the fight. It is much better for both sides if essential resources/capabilities are not destroyed permanently.
Destruction is based on a fire-and-forget mentality; it is a synonym for war. Cyberwar is about to change that.
Invisible (software) failures have different effects. In peacetime, coincidence, crime, or the incompetence/greed of operators could be blamed for service outages. Software problems don’t sound permanent; they create the hope that things can be fixed quickly (why not use backups?), often triggering unrealistic expectations. However, backups have the same problem – a fix requires an in-depth understanding of the attack. Meanwhile, disinformation or active suppression of more accurate news during a cyberwar could support the attackers’ interests and narratives. Malware is more like an injury that requires continuous attention – like anti-personnel mines, designed to create extended havoc around the injured and their surroundings rather than death or irreparable (material) destruction.
But in the big picture, today’s wars cost much more than the damages from destruction; sanctions are designed to punish aggressors economically. Predicting and preparing for sanctions by improving data transparency on (real) current/future vulnerabilities could mitigate an aggressor’s concerns about uncertainties or surprises from unknown consequences. Additionally, systematic spying on suppliers (with uncertain/risky dependencies) can reduce the transition time to greater independence, thereby lessening the deterrence from third-party responses. In occupied countries, commerce (particularly manufacturing) should ideally not be disrupted, lest markets seek replacements or create opportunities for new suppliers or manufacturers.
The main problem with Cyberwar 2.0+ is that it significantly decreases the cost of war for attackers. For the attacker, Cyberwar 2.0 has a predictably high net-positive outcome: global Hacker-AI-based spying before and during sanctions, together with the reduction in destruction/sabotage, can significantly reduce the disadvantageous consequences of war.
The annexation of Taiwan (ROC) could become the first example of Cyberwar 2.0+. With technical progress simplifying the development/deployment of surveillance capabilities, countries like Russia or North Korea could also build offensive Cyberwar 2.0 capabilities. Developed countries are not only vulnerable themselves; they could also unwittingly supply rogue nations or even criminals with advanced cyber tools/capabilities.
For example, the PRC could attack the ROC without declaring war, denying any hostile involvement. Then, without any obvious signs of imminent collapse, the ROC’s sovereignty could effectively end after a few decisive covert cyberwar steps, including the digital decapitation of its government and society. Most citizens and news media would be oblivious to what had just happened. Some people in the administration’s middle or even lower tiers would be intimidated into becoming (silent) collaborators. Overt military or cyber actions are more likely distractions or misdirections. War hysteria over a pending invasion, and general confusion, would be used to destabilize the country. Collaborators (and internal enemies of the existing order) are identified; they could silently take over key operational positions via faked orders, while the higher echelons of the government are technically incapacitated and silenced.
Also, most of the military’s (software-based) weapons could be (temporarily) sabotaged/deactivated, and non-electronic ammunition shipped to places where it can’t be used. Businesses (operated by PRC intelligence services) could be invited (via planted collaborators) to send trained “experts” who reinforce (legal/technical) ties to the PRC. It could then be expected that no constraints are set on AI-based public surveillance in the event of a hostile takeover. After the ROC’s (cyber-)decapitation, the USA’s influence is quickly neutralized.
Moreover, there is no legal basis for the USA to be involved in the ROC’s internal affairs (certainly not militarily). Deterring the PRC would require the ROC to destroy its own country (via sabotage) to increase the PRC’s costs of waging that war. However, sabotage can be suppressed by mass arrests and large reeducation camps for people accused of dissent (as with the Uyghurs).
The USA changed its nuclear posture in 2018 to include cyberattacks among the events to which it might respond with nuclear weapons. However, do nuclear powers already know which Cyberwar 2.0+ activities are below the threshold of nuclear retaliation? The problem: Hacker-AI capabilities are difficult to determine/detect. Hacker-AI might have a flexible, adaptable framework that facilitates any action. Additionally, its scale of deployment is unknown until it is (potentially) too late, at which point retaliation capabilities might be affected or even effectively sabotaged and neutralized. The speed of a cyberwar attack may not give targets any option to respond; this war could take 3 minutes or 3 seconds – nobody can know today.
A successful Cyberwar 2.0 would probably send shockwaves through the world. If the ROC can be taken over, how can the USA or other nations protect their sovereignty? Deniability and misdirection are essential. Without proof, an uncontrolled, fear-triggered escalation close to nuclear war is conceivable, but nuclear retaliation without proof seems unlikely. Instead, a massive mobilization of technical talent would likely be initiated; how fast or how successfully we can react depends on many unknowns.
What would superpowers do if rogue nations shared Hacker-AI-generated malware (capable of stealing crypto keys) with gangs of international criminals or terrorists to undermine trust in eCommerce or online banking?
5. Countermeasures
Once Hacker-AI is available, the additional technologies that use its data for a Cyberwar 2.0+ are no longer a technical barrier. Mass surveillance technologies are routinely deployed in the PRC and other countries to prevent terrorism – doing what Pegasus already does, at scale, is feasible. Extending remote control over surveillance capabilities into other countries is also a quickly solvable problem. Firewalls could be blindsided and sabotaged covertly (if humans can do that, then Hacker-AI can as well). Developing tools that interface with the data provided by Hacker-AI, or tools that intimidate people directly via AI bots, cannot be prevented. With a government’s consent, there are no restrictions on how invasively AI is used to surveil a population or an occupied territory. Developing and deploying countermeasures will depend on understanding Hacker-AI’s vulnerabilities and on the level of preparation ahead of Cyberwar 2.0. Most likely, the only way to counter Hacker-AI and cyberwar is to deploy technical guardrails (via soft- and hardware) on as many devices as possible.
Hacker-AI helps malware remain undetectable on all systems. So we must change all systems and provide defenses or countermeasures via easily and safely deployable updates or retrofits. Technical measures must prevent the code-execution methods provided by Hacker-AI, or detect actions taken by Hacker-AI, or both.
Undoubtedly, we could get into an arms race between defenders and attackers. Still, with proactive, preventative, separate, and redundant cybersecurity solutions, we could give defenders an advantage they can easily defend.
Hacker-AI activities can be stopped proactively with a strict separation of regular computation from security-related features. Hacker-AI payloads are (always) loaded into RAM as executables (incl. scripts), and every app can be hash-coded (and thereby uniquely identified). These hash codes are compared against cached whitelisted/greylisted (accepted) or blacklisted (rejected) hash values before the code is accepted into RAM; all unknown apps/hash codes are rejected for execution. This feature could be deployed immediately via software updates. Additional measures are proposed to provide redundancy against malicious apps that slip through the cracks. A minimal sketch of this gating logic follows below.
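The sketch below is a minimal Python illustration of the hash-based execution gating described above, assuming a default-deny policy; the list contents and the loader hook are illustrative assumptions, not an existing OS API. In a real deployment, this check would run inside the OS loader or a hypervisor, and the lists would be cached, signed, and tamper-protected.

```python
import hashlib
from enum import Enum
from pathlib import Path

class Verdict(Enum):
    ALLOW = "allow"      # hash found on the white-/greylist
    DENY = "deny"        # hash found on the blacklist
    UNKNOWN = "unknown"  # unlisted; rejected under default-deny

def sha256_of(path: Path) -> str:
    """Hash the executable exactly as it would be loaded into RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def check_executable(path: Path, accepted: set[str], rejected: set[str]) -> Verdict:
    """Gate execution: only known-good hashes may be loaded into RAM."""
    digest = sha256_of(path)
    if digest in rejected:
        return Verdict.DENY
    if digest in accepted:
        return Verdict.ALLOW
    return Verdict.UNKNOWN  # default-deny: unknown code never executes

if __name__ == "__main__":
    # Illustrative list contents; real lists would be vetted and signed.
    accepted = {sha256_of(Path("/bin/ls"))}  # pretend this app was vetted
    rejected: set[str] = set()
    print(check_executable(Path("/bin/ls"), accepted, rejected))  # Verdict.ALLOW
```

The design choice that matters is the default-deny in the last branch: a digital ghost whose hash appears on no list is simply never executed, regardless of how well it hides.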
The above-proposed measure could be introduced (quickly) via an updated OS solution (utilizing hypervisors). A similar approach was used against rootkits (HookSafe). Later, white-/greylisting could be provided by separate hardware, making Hacker-AI intrusions/activities even easier to detect. Additional features could make this initial protection method sufficiently safe against Hacker-AI and digital ghosts.
6. Conclusion
Due to the speed with which it provides highly adaptable, malware-based cyber-weapons for all types of IT devices, Hacker-AI represents a new quality in cyber-warfare. Cyberwar 2.0+ could be considered a free lunch for countries willing to wage war. Detecting Hacker-AI or Cyberwar 2.0+ activities is only achievable with the means to stop them. Malware-based (total) user surveillance, denial of service/access, and direct citizen intimidation result from self-inflicted security vulnerabilities that can and must be fixed immediately.
Cybersecurity must be enabled to stop nations from waging cyberwars by stopping malware, spyware, and ransomware. We need low-level, proactive, preventative, and redundant cybersecurity measures that could be deployed (via updates and later via hardware retrofits) unnoticed by regular users. The combined strength of these cybersecurity measures should amount to security overkill, so that we no longer have to fear Hacker-AI or Cyberwar 2.0+. We should be able to trust digital forensics, and not give hackers, governments, or criminals the tools to deceive investigators about what is true.
Deterrence can only work if the costs of war remain extremely high. Cyberwar 2.0+ could fundamentally change the calculus around war. Hacker-AI could make waging war an attractive proposition.
Preventing malware and Hacker-AI from being technically usable is considered the only viable and effective strategy against Cyberwar 2.0+.