Safe Development of Hacker-AI Countermeasures – What if we are too late?
Hacker-AI is AI used in hacking. It is an advanced super-hacker tool (gaining sysadmin access on all IT devices), able to steal access credentials, crypto-keys, and any secret it needs. It could use every available feature on a device. It could also help malware evade detection as a digital ghost and make itself irremovable on every device it visits. The new quality of Hacker-AI arises from the speed at which it develops new malware solutions for every available platform (CPU/OS).
In previous posts, “Hacker-AI and Digital Ghosts – Pre-ASI” and “Hacker-AI – Does it already exist?”, I hypothesized that Hacker-AI is feasible, already developed, or potentially ready to be deployed. This Hacker-AI could threaten our privacy, freedom, and safety while being an attractive cyber weapon for governments intending to wage cyberwar on other countries. Moreover, the first Hacker-AI could be the only one, i.e., it could deny full access to all late-coming systems or software-based countermeasures; it could give its operators permanent global supremacy over their international rivals.
In “Hacker-AI and Cyberwar 2.0+”, I discussed the use of Hacker-AI in surveillance, in selective denial of service/access, and in personalized threats against non-military personnel via AI bots, incl. the use of deep fakes in communication or publication. Cyberwar 2.0+ would make destructive war actions unnecessary. Reducing the costly consequences of war makes war more likely. Also, misdirection can hide the culprits responsible for waging a cyberwar, so innocent parties could become targets of retaliation.
In a solution-oriented post, “Improved Security to Prevent Hacker-AI and Digital Ghosts”, I proposed several technologies against Hacker-AI. The development, production, distribution, and deployment of counterdefenses will take time. I cautioned that activities leading to countermeasures could be sabotaged or even made impossible (in the worst case) if unprotected developers or manufacturers are harassed or actively attacked by a deployed Hacker-AI.
So, could we be too late to develop countermeasures against Hacker-AI, particularly if digital ghosts are already out there? And what could we do if we were late?
1. Threat Levels
Predicting future events or capabilities is impossible. Still, we can proactively categorize Hacker-AI-related scenarios into threat levels, for each of which we must apply or prepare different protective measures.
Threat-Level 0 (TL-0) assumes that no advanced Hacker-AI exists (yet) that could sabotage the development, production, distribution, or deployment of countermeasures.
Threat-Level 1 (TL-1) assumes a slight chance that an advanced Hacker-AI has been developed and possibly deployed to sabotage anything from the development to the deployment of countermeasures. Even if most experts agree that we are still in TL-0, professional prudence demands that we assume an adversarial Hacker-AI is interfering with our efforts to obtain reliable countermeasures.
Threat-Level 2 (TL-2) is declared internally (not publicly), i.e., among everyone involved with the development, production, distribution, or deployment of Hacker-AI countermeasures. TL-2 is declared as soon as there is irrefutable evidence that an advanced digital ghost has modified an OS to remain hidden.
Under TL-2, all steps toward final deployment must be performed repeatedly on multiple different, isolated, non-networkable systems, using older software versions loaded from immutable storage media (e.g., old CDs).
Simplified, to-be-developed hardware systems are used to ensure that the hard drives used are completely overwritten and uncompromised. Other (new/simplified) tools are used to ensure that the BIOS is not compromised and that all hardware components used are older.
Also, data transfer must happen via immutable data-storage media (e.g., write-once CD-Rs read by older CD drives only). All transfer media are archived so that additional (attack) data can eventually be revealed with to-be-developed tools. We cannot prevent early Hacker-AI interference, but we should be able to detect and remove it later. In TL-2, every step toward deployment of countermeasures is distrusted and re-analyzed, even years later, with more advanced and secure tools.
Finally, if key contributors, producers, or facilitators are attacked directly by malware/Hacker-AI, TL-2 should internally be elevated to an emergency level (TL-2X). Elevated protection measures from the next level (TL-3) are then used to protect people and deliverables with more (non-electronic) resources.
Threat-Level 3 (TL-3) is when Hacker-AI is already being used within a Cyberwar 2.0+ to occupy another country. Public discussion of its advanced capabilities, and of our vulnerabilities, would create a global shockwave or even panic among leaders, media, and helpless citizens. It would indicate that no computer and no defense system is protected well enough to prevent Hacker-AI from being used against the civilian population, businesses, government, military, or additional countries.
Although fear and uncertainty could lead to nuclear war/retaliation, it is assumed that engineers and scientists from non-occupied or occupying countries would work together tirelessly to limit Hacker-AI’s scope of capabilities.
Public announcement of TL-3 would trigger a significant change in the civil defense posture, as every network-connected device could turn hostile. Citizens would need to be trained to be extra vigilant about their surroundings and deactivate as many electronic devices as possible. In general, we would be advised to depend more on older, less capable devices until countermeasures are in place.
Threat-Level 4 (TL-4) is when Hacker-AI has effectively defeated all proponents or forces providing countermeasures against it. Surveillance, together with collaborators instructed to destroy all possible countermeasures within development, production, distribution, deployment, and usage, would prevent any chance of circumventing comprehensive surveillance.
In TL-1, we are dealing with an imaginary digital ghost. Developers would entertain scenarios whose realism is unknown. Our assumptions may therefore overestimate Hacker-AI’s current or future capabilities, but we would act as if this Hacker-AI is waiting for a chance to interfere with us adversarially.
Starting with TL-2, we are dealing with real cyber threats that we should not underestimate – so we might rather overestimate them. Hacker-AI has self-improving capabilities, supporting or supported by smart operators focused on defending their position of global supremacy at (almost) all costs. Knowing that Hacker-AI exists would elevate the urgency of establishing effective countermeasures (globally) to new heights.
If the USA or another country with a liberal system has or uses Hacker-AI for defensive or retaliatory purposes, then we should hope it also uses it to support the development of comprehensive countermeasures. Possibly, government capabilities could provide digital protection around all systems involved in developing, producing, distributing, and deploying countermeasures as part of TL-2 or TL-3 support.
The most we could expect from the to-be-developed countermeasures is to stop nations from successfully waging Cyberwar 2.0+ on other countries (new targets). The liberation of countries already subject to Hacker-AI-controlled surveillance is likely impossible. Also, all countermeasures are purely defensive.
We are not discussing criteria for a “too-late” or global TL-4 (defeat) situation. However, there is probably a tipping point at which “too late” or defeat is the appropriate description.
2. Security Measures
In levels TL-1 and above, we would need comprehensive security for all people involved in the development, manufacturing, distribution, and deployment of the countermeasures. They should feel safe, i.e., free from harm or the threat of harm. Police and other security organizations must help create conditions in which all involved people are safe and protected, regardless of the level or importance of their contribution.
Personal protection for those directly involved and their immediate family members must be monitored; they must be safeguarded from unfounded accusations and from dangerous or intimidating acts like swatting, in which hoax emergency calls send police SWAT teams to raid the homes of innocent people. It is conceivable that some key contributors must be more vigilant and isolated so that they cannot be attacked via drones, fire, poison, etc. Ideally, key people’s physical locations would be hidden from surveillance and leave no data traces while they still have the means to communicate safely.
At least in the initial phase, the casual and thoughtless use of networks or removable storage drives should be over for people working on the countermeasures. They should be aware of possible threats to their lives/safety or privacy from electronic equipment in their surroundings.
Offline tools should be developed so that people receive sufficient protection against unjustified accusations, fabricated evidence, falsification, or deep fakes, using simple and reliable evidence-preservation methods.
In more existential TL-2X or TL-3 situations, key contributors should be trained in best-practice methods of going dark for weeks or months, or even be protected by trained but inconspicuous security teams. These measures may seem like overkill, but we should not take chances if humanity’s future of living in freedom depends on them. We should prepare for these situations ahead of time while avoiding any identifiable digital footprints in the run-up. The US Air Force has created secret test sites, like Area 51; the US Cyber Command should develop software and hardware at similar sites with the help of pre-identified or recruited experts in protected and well-equipped environments.
Hacker-AI could covertly modify software or hardware products with compromised tools; humans would not detect this because they cannot read or understand compiled machine code. Update, production, distribution, and deployment processes are too complicated for humans to understand or validate comprehensively.
All software and hardware used in countermeasures is developed as open source. Dedicated experts can continuously scrutinize and improve existing results and deliverables. This method is more effective than backroom code reviews or security audits done to receive certifications.
All code should be compiled independently by different developers on different systems and continuously checked/compared for additional features slipped in by Hacker-AI. Moreover, if many experts watch over simplified features and code, the probability that suspicious (hidden/malicious) features are detected increases significantly. Upon detecting that Hacker-AI has penetrated code or compilation security, experts would hunt down the source of these manipulations and remove it.
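To make this concrete, here is a minimal sketch (not an existing tool of this effort) of such an independent-compilation check: each builder compiles the same source with a pinned, reproducible toolchain and compares the resulting binary hash against digests reported by the other builders. The file names, compiler flags, and the peer-digest list are hypothetical placeholders.

```python
# Minimal sketch: verify that independently produced builds of the same
# source agree, assuming a reproducible build with a pinned toolchain.
import hashlib
import subprocess
import sys

SOURCE = "countermeasure.c"      # hypothetical source file
BINARY = "countermeasure.bin"    # local build output

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Build locally with deterministic flags (no embedded local paths).
subprocess.run(["gcc", "-O2", "-ffile-prefix-map=.=.", "-o", BINARY, SOURCE],
               check=True)
local_digest = sha256_of(BINARY)

# Digests reported by other, independent build machines (placeholder file).
with open("peer_digests.txt") as f:
    peer_digests = {line.strip() for line in f if line.strip()}

if local_digest in peer_digests:
    print("OK: build matches independent builders")
else:
    print("ALERT: divergent binary - possible covert modification")
    sys.exit(1)
```

A single divergent digest does not prove sabotage (non-determinism is a common cause), but it forces exactly the kind of expert investigation described above.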
3. Protection of Development
The purpose of the proposed methods is to continue the development of countermeasures when sabotage of these activities is a real possibility. We assume that Hacker-AI’s most significant method of attack is code modification, i.e., leaving backdoors or sleeper code in the to-be-developed/-deployed security code. The goal is, therefore, (A) to prepare for late sabotage detection, (B) to repair damage or its consequences quickly after detection, and (C) to reduce Hacker-AI’s possible impact via clean-room-type software/hardware environments for developers.
The scope of the development effort is broad. It comprises all activities in which code or products can still be changed, before fixed code or products are produced, distributed, or deployed via fixed, scalable, and immutable processes. Protecting these processes against covert changes is part of the development.
(A) Preparation for Late Sabotage Detection
It is assumed that sabotage cannot be detected when it happens. Hacker-AI is assumed to be on the machine when developers code their solutions. The problem is not the loss of privacy or secrecy while developers write their code; the code is already open source. The problem is that compiled code acquires additional (hidden) features just before, during, or after compilation. Due to the system’s complexity, developers are almost helpless in detecting backdoor code inserted by Hacker-AI. It is also assumed that Hacker-AI creates several layers of protection around this compromised, compiled code, making human validation of the compiled software impossible; this is realistic if we assume Hacker-AI is already ubiquitous.
We assume that developers know their source code very well and would detect suspicious code changes. Therefore, Hacker-AI would reveal its existence if new code or features appeared within files that should not contain that code or provide that feature. Collaboration among several experts is done transparently; other developers see every change. New/changed code is always assigned to someone responsible for it.
Recently, supply chain security in open source was improved by several services, incl. GitHub. Unfortunately, this security is based on public/private key (PPK) systems. Using PPK against a Hacker-AI that we assume can steal crypto-keys covertly is not enough; we must demand that no crypto-key is exported, seen in cleartext, or processed in a regular/shared CPU.
Still, if we cannot check compiled code for hidden modifications immediately and comprehensively, we can create physically immutable data, e.g., on a CD, to be checked later by new hardware tools. These hardware tools don’t need to exist when the immutable files are created. Later, these files can serve as irrefutable evidence of Hacker-AI activities, or they could confirm that there was no Hacker-AI interference. It will be challenging to develop tools that we can trust and that give us independent confirmation.
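As a minimal sketch of how such immutable evidence could be prepared today: hash every build artifact into a manifest and burn the manifest together with the artifacts onto write-once media for later forensic re-verification. The directory layout and manifest format below are illustrative assumptions, not a specified standard.

```python
# Minimal sketch: record SHA-256 hashes of all build artifacts in a
# manifest intended for write-once media (e.g., CD-R), so that future,
# more trustworthy tools can re-verify today's artifacts.
import hashlib
import json
import pathlib
import time

ARTIFACT_DIR = pathlib.Path("release_artifacts")   # hypothetical directory

def sha256_of(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

entries = [
    {"file": p.name, "size": p.stat().st_size, "sha256": sha256_of(p)}
    for p in sorted(ARTIFACT_DIR.iterdir()) if p.is_file()
]
manifest = {
    "created": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    "entries": entries,
}
# The manifest is burned onto the same write-once medium as the
# artifacts and archived for later forensic comparison.
pathlib.Path("manifest.json").write_text(json.dumps(manifest, indent=2))
print(f"{len(entries)} artifacts recorded")
```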
These validation or confirmation tools must consist of simplified hardware with only the required software features, i.e., code that is always/regularly used. These tools should have no multi-tasking or multi-threading capabilities. Additionally, their RAM should be strictly segmented into a range containing executable code only and another range with data to be processed (Harvard architecture). The executable code in these tools is simple, well structured (i.e., at the machine-language level), and transparent to the outside so that qualified experts can do in-depth inspections anytime. Furthermore, users must be sure that no covert changes happen between inspections, which is guaranteed if there is an air gap (physical separation from the network) between the device and the way it receives data.
For validation, these tools could, e.g., prove the congruence of features defined in the source code with features provided in the compiled code, i.e., show that no deviation or modification from an attack is present in the compiled code. In using these tools, security must matter more than efficiency.
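Full feature congruence is a hard research problem; as a crude first proxy, one can at least check that a compiled object defines no function symbols beyond what the source declares. The sketch below assumes binutils’ nm is available and uses hypothetical file names and a rough C-only heuristic, nothing more.

```python
# Crude proxy for source/binary feature congruence: flag function
# symbols in the compiled object that are not traceable to the source.
import re
import subprocess

SOURCE = "tool.c"   # hypothetical source file
OBJECT = "tool.o"   # its compiled object

# Very rough heuristic for C function definitions in the source.
with open(SOURCE) as f:
    src = f.read()
declared = set(re.findall(r"^\s*(?:[\w\*]+\s+)+(\w+)\s*\([^;{]*\)\s*\{",
                          src, re.MULTILINE))

# Defined code symbols ('T'/'t') in the compiled object, via nm.
nm_out = subprocess.run(["nm", "--defined-only", OBJECT],
                        capture_output=True, text=True, check=True).stdout
compiled = {parts[2] for parts in (l.split() for l in nm_out.splitlines())
            if len(parts) == 3 and parts[1] in ("T", "t")}

unexplained = compiled - declared
if unexplained:
    print("Suspicious symbols not traceable to source:", sorted(unexplained))
else:
    print("No unexplained function symbols found")
```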
(B) Instant Repair of Damages
Upon detecting covert modifications in security code, we will use additionally (persistently) stored information to identify the compromised tool or tools used in that attack. Once the tool is fixed, we must be able to fix the security code, i.e., recompile it and distribute it to all compromised instances automatically. We also need to be sure that automated or distributed updates do not create new security breaches.
This process of detecting problems, fixing them, and redeploying solutions is essential for mitigating attack damage immediately. We need methods to flag not-yet-fixed devices as potentially unreliable. Security code is stored immutably for attackers yet mutably for defenders, with multiple independent/redundant checks. We must put extra effort into developing or deploying tool features that detect or reveal attacker code/features the attackers could not have known about when they designed their attacks.
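A minimal sketch of the flagging step, under the assumption that devices report the hash of the security build they run and that a list of compromised build hashes is maintained (all values below are placeholders):

```python
# Minimal sketch: devices still reporting the hash of a compromised
# build are flagged as unreliable until repaired and redeployed.
revoked_builds = {"9f2ab37c", "c41be002"}   # hashes of compromised builds

device_reports = {                          # device_id -> running build hash
    "dev-001": "9f2ab37c",
    "dev-002": "7be04f11",
}

for device_id, build_hash in device_reports.items():
    if build_hash in revoked_builds:
        print(f"{device_id}: UNRELIABLE - schedule repair/redeployment")
    else:
        print(f"{device_id}: ok")
```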
Attackers are thereby prevented from modifying low-level security code, i.e., they cannot adapt to new detection methods. Without such adaptations, Hacker-AI’s protection of its attack methods would eventually fail.
(C) Hacker-AI Impact Reduction via “Digital Clean Rooms”
All security or countermeasure tools, their code, and all information related to these tools are open source. We do not need secrecy around any component. All algorithms are isolated from the main OS and from each other. The source code is simplified regarding internal complexity and features; it is not (prematurely) optimized for marginal performance gains. Every change is scrutinized for malicious intent or unnecessary features.
Still, source code is written with tools, compiled with tools, and distributed with other tools. Each tool the code came in touch with, including software present in RAM at the same time, is logged via its name, metadata, and binary hashcode. However, security-critical incidents could still happen if, e.g., new security software and the hashcode that uniquely represents it are generated simultaneously or in coordination by an attacker. Initially, we must accept that attackers could fool us: methods of archiving/storing data about new security software, i.e., the compiled software and its hashcodes, remain vulnerable to attack despite all protective measures.
Changes to the development, compilation, or distribution environment must be made more difficult by using specially compiled Linux kernels that automatically track the hashcodes of all loaded executable files. Continuously tracking hashcodes and logging every change reliably on physically immutable storage media will preserve attack traces. These data are later analyzed via tools on simplified devices, e.g., a RISC-V system with simplified software. Over time, we get a cleaner digital clean room.
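Linux’s Integrity Measurement Architecture (IMA) already records a hash of every executed file, which is close to what is described above. A minimal sketch of diffing such a measurement log against a known-good baseline follows; note that the log path and the ima-ng entry format depend on kernel configuration, and the baseline file is a hypothetical placeholder.

```python
# Sketch: audit executed code by diffing an IMA-style measurement log
# against a known-good baseline of file hashes.
LOG = "/sys/kernel/security/ima/ascii_runtime_measurements"
BASELINE = "baseline_hashes.txt"   # one known-good sha256 hex per line

with open(BASELINE) as f:
    known_good = {line.strip() for line in f if line.strip()}

with open(LOG) as f:
    for line in f:
        # typical ima-ng entry: PCR template-digest ima-ng sha256:<hash> <path>
        fields = line.split()
        if len(fields) < 5 or not fields[3].startswith("sha256:"):
            continue
        digest = fields[3].split(":", 1)[1]
        path = fields[4]
        if digest not in known_good:
            print(f"unlisted executable: {path} ({digest[:16]}...)")
```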
Additionally, some developers would intentionally use simplified devices for their regular work. They would separate coding and code compilation onto different devices. Transferring data between these systems takes additional time and goes against developers’ propensity for efficiency, but security and code integrity have priority. These systems would have no hardware for wireless networking. Cable-based Ethernet should be physically disabled, as should internal mics and cameras. Every unused USB connector is disabled as well.
Like hardware manufacturers, software developers (working on security) should also move their source code into digital clean rooms where suspicious, compromised code is quickly detected.
The expectation is that partial security/countermeasure solutions would throttle the impact of Hacker-AI. If this approach works, it could step-by-step increase our confidence in the integrity and reliability of less compromised solutions on next-generation devices. It is unknown whether this partial reduction of Hacker-AI’s undetected impact is feasible, but it seems to be the best we can do for now.
We assume that security is increased by simplifying devices and removing unnecessary interfaces. Less complex processors, smaller RAM utilization by a non-multitasking OS, and fewer features help us toward this goal. We may also take a closer look at some performance optimizations and, in independent reviews, remove them for simplicity.
4. Protection of Manufacturing, Distribution, and Deployment
Software deployment via automated updates is not a distribution problem because delivery happens via the ubiquitous internet. However, software-based updates might come too late and would not eliminate irremovable malware/Hacker-AI from a system. In TL-2 or TL-3, this problem must be accepted. These software-based countermeasures must still be distributed because they lay the foundation for an independent hardware security solution that uses the same hashcodes for its white-, gray-, and blacklisting.
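A minimal sketch of the hash-based white-/gray-/blacklisting mentioned here; the lists and the policy for unknown code are illustrative assumptions, not the hardware solution’s actual design:

```python
# Minimal sketch: classify code by its build hash into allow/deny/graylist.
WHITELIST = {"a1d6f0"}   # hashes of verified-good builds (placeholders)
BLACKLIST = {"b2e791"}   # hashes of known-malicious code (placeholders)

def classify(build_hash: str) -> str:
    if build_hash in WHITELIST:
        return "allow"       # verified code may run normally
    if build_hash in BLACKLIST:
        return "deny"        # known-bad code is blocked
    return "graylist"        # unknown code runs restricted until reviewed

for h in ("a1d6f0", "b2e791", "c3f802"):
    print(h, "->", classify(h))
```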
Hardware-based security solutions will not require high-end technology or manufacturing equipment; it is assumed that they could be produced quickly with a war-effort-level utilization of different manufacturing facilities. The biggest problem is preventing or suppressing malware-based sabotage. Unfortunately, time-consuming interruptions from malware won’t happen until the equipment or systems are used. If critical computerized systems are isolated, potentially even from each other, we could test them and trigger malware into activating itself prematurely.
Trained professionals prepare organizations with advice on workplace security and safety measures. Similarly, cybersecurity professionals should reveal the threats from Cyberwar 2.0+ in every organization involved with the countermeasures. Initially, there could be a lot of ineffective improvising due to a lack of guidance and misunderstanding of how Hacker-AI spreads malware. But the full mobilization of people attacking problems from different sides could produce some (surprise) breakthroughs over time.
An (open-source) expert/development community could educate people about the software and network dependencies that create critical vulnerabilities within the development, production, distribution, and deployment of countermeasures. A dynamic exchange between people at the forefront and experts who know about possible system vulnerabilities could yield improved solutions that isolate the production, distribution, and deployment processes for the security hardware from targeted attacks.
In TL-1, many professionals will not take the threat of Hacker-AI interference seriously enough. Even if there are signs of Hacker-AI interference, most people within the chain of production, distribution, and deployment would likely wait for TL-2X or TL-3 events before actively participating in advanced security measures. Then they might be ready to accept the inconvenience and pain of isolating equipment from the network. Unfortunately, that might be too late because their software might already be compromised with difficult-to-detect malware that interferes with the delivery of reliable tools/hardware.
The struggle to deliver sufficiently good countermeasure tools could go on for many years, during which countries, businesses, and people are potentially at risk of being attacked or damaged by Cyberwar 2.0+ tools. The reason would be that measures to protect systems/devices started too late.
5. Discussion
Starting the development with a TL-1 assumption is prudent. It won’t have significant implications for people outside the development of countermeasures. It will give professionals a new perspective on vulnerabilities within their development, production, distribution, and deployment processes. The proposed protection measures, (A) Late Sabotage Detection, (B) Instant Damage Repair, and (C) Digital Clean Rooms, are part of development within TL-1. These measures can also be used under higher threat levels, but with a stricter focus on device isolation and on deactivating or controlling unnecessary device interfaces.
Development could be (slightly) slowed down by TL-1 security measures. However, if development is done in parallel, initially with low or no security (i.e., at TL-0), we would likely have deployable results quickly. Other teams of developers would work on hardening the entire development/deployment process with software and hardware tools. The developed countermeasure solutions are then independently validated as soon as more secure developer environments are available.
Detecting malware within the development process, or later within deployment, is not by itself a reason to assume a TL-2 situation. Calling out this level, and assuming that an adversarial Hacker-AI was deployed against the countermeasure effort, should require evidence or a credible whistleblower. We would need to detect malware with ghost-like features, or malware showing unusual flexibility and variety in gaining access to systems via multiple user-role/privilege elevation methods. We would also need to see the use of reverse code engineering (RCE) that modifies features in existing apps.
Currently, zero-day vulnerabilities (0-days) are very expensive because hackers find them manually. Using 0-days, or having (expert-level) defenders learn about them, quickly renders them useless or worthless. If we saw many more attacks with different 0-days, or RCE combined with code-modifying attacks, against the development of any countermeasure component, experts seeing the numbers and evidence could then call for an internal elevation to TL-2.
We assumed that we could find technical measures within TL-1 and TL-2 sufficient to protect the countermeasure deliverables; however, this might be a long, iterative, and potentially competitive process in which we need to compare results over an extended period. Additionally, because of the heightened security warnings, developers will take security measures and processes more seriously, i.e., they will do many more code/system checks than they would otherwise.
Repeated checks might slow down the countermeasures’ deployment. Over time, protective solutions within developers’ environments will (eventually) detect attacks. They will not contribute additional vulnerabilities to the solutions if we are prepared to fix underlying issues immediately and safely. Intense scrutiny by different experts at every step will likely remove most problems at some point, though not necessarily within version 1 of the countermeasures. We hypothesize that version 1 will be redundant enough to protect against covert changes, limit damage, and build the operational experience that makes version 2 much safer.
However, the operators behind an adversarial Hacker-AI could start directly threatening or harming key people within the development effort. Offline tools protecting developers should then be capable of gathering this evidence reliably. With evidence, we would announce TL-2X internally; all people involved must be informed that malicious personal attacks aimed at preventing the development and completion of countermeasures are likely. How people are protected or protect themselves is beyond this paper’s scope, but professional advice and support are likely warranted. Operational plans to protect people and product development at TL-3 should be developed as soon as possible.
As soon as developers are forced to protect themselves, their families, and the physical integrity of the equipment or buildings they use (i.e., anything bad we could expect in TL-2X or TL-3/4), we must assume that development, production, and deployment could slow down significantly.
Physically handing over results from one developer to another must happen directly or via trusted (human) intermediaries because third-party logistics cannot be trusted. An additional post will discuss “Early Preparation for Cyberwar 2.0+”.
Delivering results under constant assault from Hacker-AI and Cyberwar 2.0+ capabilities is a tall order. Whether experts can deliver results, particularly countermeasure hardware, will probably depend on the quality of preparation ahead of Hacker-AI’s emergence/deployment. I am skeptical and would be very surprised if defenders could protect countermeasure development and production if they start only after one country has attacked another via Cyberwar 2.0+ (i.e., if they must start with the assumption of being in TL-3).
6. Conclusion
It is currently difficult to discuss concrete features or steps for protecting the development of security tools from an adversary that could be hidden in all systems, watching for vulnerabilities and opportunities to covertly modify (generic) security measures. It may also have the tools/capabilities to hide these security-lowering modifications over a long period. Unfortunately, this scenario is not unrealistic. Technology in our IT ecosystem is complex, and human operators have strong incentives to take full advantage of Hacker-AI permanently. Hacker-AI could be improved continuously. It could potentially go through an uncontrolled intelligence explosion, after which it is eventually much smarter than its human operators and independent of human instructions.
Additionally, we are dealing with many unknowns, and many iterations are required for defenders preparing for a threat of this magnitude. The sooner we develop hardware-based security for our IT devices, the easier we can produce, distribute, and deploy improved security.
Security is an arms race. We may solve some problems if we are too late. But if we are (really) too late, we may never catch up. We may need to fight against the advanced tools of an adversary determined to take advantage of our vulnerabilities. In that situation, it is obvious: nothing will change the fact that we were too late.