it doesn’t mean the world is safe; I think you are quite right that this weapon can be constructed. However, I don’t think it would be the same kind of catastrophic, invisible-until-no-response-is-possible failure that a strongly superhuman, planning-coherent, causal UFAI could use to take over the internet, and I don’t think a human creator could get away clean from trying to take over the world using it. military systems are probably vulnerable, but not so vulnerable that it could completely evade detection unless targeted specifically—and at that point you’re effectively describing a Stuxnet-like attack. it’s a real thing, but I think you’re slightly overestimating maximum cloakability. I do not intend to reassure.
Overestimating the degree to which Hacker-AI could make itself undetectable? And am I perhaps underestimating the effort required to make a digital ghost undetectable to malware detection? I disagree, because I have answered the following questions for myself.
(1) How conceptually different are the various operating systems? In thousands of details, I believe they differ. But the concepts by which they are designed and written are similar across all multitasking/multithreading OSes. Even multi-processing OSes are built on similar, if extended, concepts.
(2) If we ask kernel developers: could they keep an app operational but make it disappear? I believe they would have multiple ideas for how to do that. Then we could ask what they could do to cover their tracks (i.e., hide the fact that they made these changes to the kernel): could they make the changes they made to the OS disappear, undetectable to the OS itself? I believe yes. A detection tool run on a compromised system could be forced to jump into another memory area, in which these changes were not made, and then forced to jump back. Could a detector be deceived about that? Again, I believe yes. These are instructions to the DMA (direct memory access) engine loading data from RAM into the various caches or processor cores.
(3) Because OSes and CPUs were designed and optimized for performance, many low-level mechanisms are not built with simplicity in mind. How could a detector determine which operation is due to resource optimization and which was done by a ghost trying to make itself undetectable?
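The hiding idea in point (2) can be illustrated with a toy simulation. This is a hedged sketch of the well-known DKOM (direct kernel object manipulation) rootkit concept, not real kernel code: the names `enum_list`, `run_queue`, `spawn`, and `hide` are invented for illustration. The point is that a kernel keeps more than one view of the same tasks, so removing a task from the view that detectors enumerate, while leaving it in the view the scheduler uses, keeps it operational but invisible.

```python
# Toy simulation of DKOM-style process hiding (illustrative only).
# Assumption: the kernel maintains two views of its tasks --
# an enumeration list (what tools like `ps` walk) and a scheduler
# run queue (what actually receives CPU time).

class Task:
    def __init__(self, name):
        self.name = name
        self.ticks = 0  # how often the "scheduler" has run this task

enum_list = []   # stand-in for the task list a detector would walk
run_queue = []   # stand-in for the scheduler's run queue

def spawn(name):
    """Create a task and link it into both kernel views."""
    t = Task(name)
    enum_list.append(t)
    run_queue.append(t)
    return t

def hide(task):
    """DKOM-style trick: unlink from the enumeration view only."""
    enum_list.remove(task)

def schedule(rounds=3):
    """Give every task on the run queue CPU time, hidden or not."""
    for _ in range(rounds):
        for t in run_queue:
            t.ticks += 1

ghost = spawn("ghost")
spawn("editor")
hide(ghost)
schedule()

print([t.name for t in enum_list])  # detector's view omits "ghost"
print(ghost.ticks)                  # yet the ghost kept running
```

Real detectors counter this by cross-viewing: comparing the enumeration list against the scheduler's structures. The commenter's point (2) is that on a fully compromised system, even those cross-view reads could be redirected to a clean-looking copy of memory.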
I don’t think this is productive because I don’t actually disagree with your core point as much as my reply seems to have made it appear. all three points are quite solid and I have no serious criticism of your questions; I’m not comfortable answering questions like these, though. Perhaps someone else in the security research community can comment.