Erland Wittkoetter, Ph.D., is a physicist, mathematician, inventor, and entrepreneur. He is the founder of ASI-Safety-Lab, Inc., and is currently starting the (open-source) developer community NoGoStar (No-Go-*). His main talent: looking for solutions outside the (current) box. If the laws of nature do not prevent (i.e., prohibit) us from having a technical solution to a problem, then there is likely at least one solution, possibly even a simple or retrofittable one. Finding these solutions may be difficult, but they are out there. We often need to accept new (unconventional) paradigms to see them.
Erland Wittkoetter
Hacker-AI and Cyberwar 2.0+
Non-Technical Preparation for Hacker-AI and Cyberwar 2.0+
Safe Development of Hacker-AI Countermeasures – What if we are too late?
If we have Hacker-AI on a developer machine, we have a huge problem: Hacker-AI could sabotage basic security implementations via hidden backdoors or weakened compilation tools. However, I am not giving up on seeking solutions that convince humans/experts that there are no hidden or late modifications or interference from Hacker-AI. Trust must be earned; it can only come from constantly scrutinized source code and development tools.
The problem is “trust”. Trust must be earned; open source will help us gain the trust of experts. I hope that this carries some weight.
If Hacker-AI (Type 2) were on developer machines, we would have a huge problem. But I am not giving up on finding solutions for that situation (I was close to posting on that topic, i.e., on what we can do if Hacker-AI is sabotaging our security implementations). I am not giving up (yet).
Do we have a choice? We must develop security; otherwise, I don’t really want to think about what kind of world we would live in if Hacker-AI is real.
Yep, I checked out Mayhem a while ago; it seems to be an automated test case generation engine.
From what I read, they are probably not using Reverse Code Engineering (RCE) tools for their analysis; instead, they use an app’s source code. Still, this is pretty good. Their software found some vulnerabilities, but honestly, I am a bit disappointed they didn’t find more (likely because they are not going lower-level). However, they deserve a lot of credit for being a pioneer in cyber-defense. The Challenge was 6 years ago. So, is Mayhem all we have? No further progress, no breakthroughs?

My post was inspired by DeepMind’s “Reward is enough” and my (limited) experience doing RCE (DeepMind’s approach is applicable, IMHO). So, I assume we will see more defense innovations developed (offensive applications are probably not commercial), and I assume most of that happens behind closed doors. (Actually, I believe it when I hear/read that the US has impressive cyberwar capabilities; Mayhem alone is not impressive enough to explain that.)
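To make concrete what an “automated test case generation engine” does at its simplest, here is a minimal sketch of mutation-based fuzzing in Python. This is my illustration, not Mayhem’s actual approach; the target function parse_record and its planted bug are hypothetical.

```python
import random

def parse_record(data: bytes) -> int:
    """Hypothetical target: crashes on one specific malformed header."""
    if len(data) > 2 and data[0] == 0xFF and data[1] == 0x00:
        raise ValueError("unhandled header combination")  # the planted bug
    return len(data)

def mutate(seed: bytes) -> bytes:
    """Replace one random byte of the seed input."""
    if not seed:
        return bytes([random.randrange(256)])
    i = random.randrange(len(seed))
    return seed[:i] + bytes([random.randrange(256)]) + seed[i + 1:]

def fuzz(seed: bytes, iterations: int = 100_000) -> list[bytes]:
    """Randomly mutate inputs and record those that crash the target."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(mutate(seed))  # two mutations per round
        try:
            parse_record(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes

if __name__ == "__main__":
    found = fuzz(b"\x00\x00\x00\x00")
    print(f"{len(found)} crashing inputs found")
```

As far as I know, real engines like Mayhem combine symbolic execution with coverage feedback rather than blind random mutation; that is what lets them reach deeper bugs than a toy like this.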
Unfortunately, Hacker-AI will more likely be used as an attack tool; for defense, results from Hacker-AI are deployable too slowly.
In my opinion, Hacker-AI forces us to make cyber-defense proactive (not reactive). Also, we must think about preventing damage (or mitigating it quickly) and about having redundancy across different security methods.
Why has Hacker-AI not been discussed more?
Well, we should start now …
In 2018, I was listening to the change in the US Nuclear Posture Review, which said the US could retaliate with nuclear weapons against a cyberattack. At the time, I thought that kind of counter-threat was way out of line, “disproportional”: a cyberwar via hacking would be an inconvenience, rebooting a few machines and being prepared to replace some hard drives. But now I understand better. I believe the security establishment knew much more about our vulnerabilities than they were sharing with us. Why were they quiet?
Four years later: do they have a plan? Is nuclear deterrence the only tool keeping us safe? Leadership looks different to me.
I just read on BBC News: $10.5 trillion in annual damage from cyber-crime by 2025.
Yes, we must have a technical solution for Hacker-AI asap.
You mention proactive, preventative steps in damage mitigation.
There are other methods:
deterrence through reliable investigation and criminal prosecution, and making instigators pay
preventing local events from becoming global
Most important: accepting that this is a problem
What I don’t understand: why are we quiet about this problem? It seems the people who have known about this problem (for a long time) don’t even dare to call for help. Wow … how courageous.
Hacker-AI – Does it already exist?
I have posted “Improved Security to Prevent Hacker-AI and Digital Ghosts”, providing a technical solution for dealing with Hacker-AI and digital ghosts.
Am I overestimating the degree to which Hacker-AI could make itself undetectable? And am I potentially underestimating the effort of making a digital ghost undetectable to malware detection? I disagree, because I have answered the following questions for myself.
(1) How conceptually different are the various operating systems? They differ in thousands of details, I believe. But the concepts of how they are designed and written are similar among all multitasking/multithreading OS. Even multi-processing OS are built on similar but extended concepts.
(2) If we asked a kernel developer: could they keep an app operational but make it disappear? I believe they would have multiple ideas for how to do that. Then we could ask what they could do to cover their tracks (i.e., hide that they made these changes to the kernel): could they make their changes to the OS disappear or become undetectable to the OS itself? I believe yes. A detection tool run on a compromised system could be forced to jump into another memory area in which these changes were not made, and then forced to jump back. Could a detector be deceived about that? Again, I believe yes; these are instructions to the DMA (direct memory access) loading data from RAM into the different caches or processor cores. (A toy illustration of this hidden-process idea follows after this list.)
(3) Because OS and CPU were designed and optimized for performance, many low-level mechanisms are not built with simplicity in mind. How could a detector determine which operation is due to resource optimization and which was done by a ghost trying to make itself undetectable?
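To illustrate the point in (2), here is a deliberately oversimplified Python toy (not kernel code): a detector that relies on the system’s own enumeration API sees only what a compromised kernel chooses to report. All names here (REAL_PROCESS_TABLE, the PID 666 “ghost”) are invented for illustration.

```python
# Toy model: the detector's view of the process list is mediated by
# the very component it is trying to inspect.
REAL_PROCESS_TABLE = {101: "sshd", 102: "backup", 666: "ghost"}
HIDDEN_PIDS = {666}

def list_processes_clean():
    """What an uncompromised kernel would report."""
    return dict(REAL_PROCESS_TABLE)

def list_processes_hooked():
    """A compromised enumeration path: filters hidden PIDs out."""
    return {pid: name for pid, name in REAL_PROCESS_TABLE.items()
            if pid not in HIDDEN_PIDS}

def detector(list_processes):
    """A naive detector: flags any process named 'ghost'."""
    return [pid for pid, name in list_processes().items() if name == "ghost"]

print(detector(list_processes_clean))   # [666] -- found
print(detector(list_processes_hooked))  # []    -- invisible
```

The real kernel-level equivalent (hooked syscalls, manipulated page mappings, the DMA/cache tricks mentioned above) is far more involved, but the trust problem is the same: the detector’s view is produced by the thing it is trying to inspect.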
I think you are massively overestimating the ability of even a very strong narrow hacker AI to hide from literally everyone.
I seriously hope you are right, but from what I’ve learned, Reverse Code Engineering (RCE) is not done quickly or easily; still, it’s a finite game. If I know the goal, then I believe you can define rules and train an AI. RCE is labor-intensive; AI could save me a lot of time. For an organization that hires many of the brightest IT minds, I’m convinced they ask the right questions for the next iteration of Hacker-AI. I may overestimate how good Hacker-AI (already) is, but I believe you underestimate the motivation of organizations that could develop something like that. Personally, I believe work on something like this started about 7 or 8 years ago (at least), but I may be off by a few years (i.e., it may have started earlier).
Yes, Hacker-AI would need to hide from all detection setups; however, these number at most in the few-thousand or ten-thousand range (for all systems together), not in the millions. Additionally, there are a few shortcuts Hacker-AI can take. One: make detectors “lie”, because Hacker-AI has the assumed ability to modify malware detectors as well (impossible? If a human can install/uninstall them, so can Hacker-AI). Also: operators do not know what is true; they (usually) accept data at face value. Another scenario: a ghost could run the detector app in a simulator. And then there is the “blue screen of death”: Hacker-AI could trigger it before being discovered, and who gets blamed for that? The malware detector app, of course …
Regarding military systems: I don’t know enough about them; what I have read did not give me confidence that what they offer is sufficient, but I might be biased. From what I read, I assume they have a main CPU (with a von Neumann architecture), unified RAM, a logical address space, and a (much more complex) access control system, all managed by a single OS. And yes: many standard tools are missing. Are the differences between commercial and military systems significant enough? How can I know? I am (simply) skeptical about claims like “optimized for cyber-defensibility” (sounds to me like marketing talk).
Indeed, I didn’t see that or forgot about it; I was pulling from memory when responding. So you might be right that Mayhem is doing RCE.
But what I remember distinctly from the DARPA challenge (also from memory): their “hacking environment” was a simplified sandbox, not a full CISC system with a complex OS.
In the “capture-the-flag” (CTF) hacking competition with humans (at DEFCON 2016), Mayhem came in last. That was 6 years ago; we don’t know how good it is now (probably MUCH better, but still, it is a commercial tool used in defense).
I am more worried about the attack tools developed behind closed doors. I recently read Bruce Schneier’s “When AI Becomes the Hacker” (May 14, 2021):
I don’t share that optimism. Why? Complexity creates a combinatorial explosion (an extremely large search space). What if vulnerabilities are like (bad) chess moves: could AI tell us, in advance, all the mistakes we could make? I don’t want to overstretch this analogy, but the question is: is hacking a finite game (with a chance that we can remove all vulnerabilities), or is it an infinite game?
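A rough, illustrative calculation of that combinatorial explosion (the branch counts and the analysis throughput are made-up numbers, not measurements): a program with n independent two-way branches has up to 2^n distinct execution paths.

```python
# Illustrative arithmetic only: n independent two-way branches give 2**n
# distinct execution paths; exhaustive exploration quickly becomes infeasible.
CHECKS_PER_SECOND = 10**9  # hypothetical analysis throughput

for n in (20, 50, 100):
    paths = 2 ** n
    seconds = paths / CHECKS_PER_SECOND
    years = seconds / (3600 * 24 * 365)
    print(f"{n} branches: {paths:.2e} paths, ~{years:.2e} years at 1e9 paths/s")
```

Even at a billion paths per second, 100 branches already put exhaustive exploration in the range of ~10^13 years, which is why I doubt we can enumerate all possible mistakes in advance.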