It’s impossible to prove that an arbitrary program, which someone else gave you, is correct. That’s equivalent to the halting problem; more generally, Rice’s theorem says every non-trivial semantic property of programs is undecidable.
Yes, we can prove various properties of programs we carefully write to be provable, but the context here is that a black-box executable CrowdStrike submits to Microsoft cannot be proven reliable by Microsoft.
There are definitely improvements we can make. Counting just the ones already made in other operating systems (or parts of them), we could:
Rewrite in a memory-safe language like Rust (see the first sketch after this list)
Move more stuff to userspace. Drivers for e.g. USB devices can and should be written in userspace, using something like libusb (see the second sketch after this list). This goes for every device that doesn’t need performance-critical code or device-side DMA management; that still leaves a bunch of things in the kernel, but it’s a start.
Sandbox more kinds of drivers in a recoverable way, so they can access hardware efficiently but are still prevented from touching the rest of the kernel and userspace, and can ‘crash safe’. For example, Windows can recover from crashes in graphics drivers specifically, which is an amazing accomplishment! Similarly, Linux’s eBPF verifier statically checks loaded programs so they can’t access memory they shouldn’t.
Expose more kernel features via APIs, so people don’t have to write drivers to do things that aren’t literally driving a piece of hardware. Then even if CrowdStrike has super-duper permissions, a crash in CrowdStrike’s own code doesn’t bring down the rest of the system; it would have to do that intentionally.
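To make the memory-safety point concrete, here is a minimal Rust sketch (the lookup table and the bad index are invented): the kind of out-of-bounds read that takes down a kernel driver written in C becomes an ordinary, recoverable error path.

    fn main() {
        // Pretend this is a lookup table parsed from a content update.
        let table = vec![0u8; 16];
        // An index taken from untrusted input; 42 is past the end.
        let idx = 42;
        // In C, `table[idx]` would read out of bounds: undefined behavior,
        // and in a kernel driver, a likely crash. Safe Rust forces either a
        // bounds-checked panic or, as here, an explicit error path.
        match table.get(idx) {
            Some(byte) => println!("byte = {byte}"),
            None => eprintln!("index {idx} out of bounds; rejecting input"),
        }
    }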
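And for the userspace-driver point, a sketch of touching USB hardware with no kernel code of your own, using the rusb crate, which wraps libusb (crate and version are my choice; this only enumerates devices, but real userspace drivers build on the same API):

    // Cargo.toml: rusb = "0.9" (version is an assumption)
    fn main() -> rusb::Result<()> {
        // Enumerate USB devices entirely from userspace via libusb.
        for device in rusb::devices()?.iter() {
            let desc = device.device_descriptor()?;
            println!(
                "bus {:03} device {:03}: {:04x}:{:04x}",
                device.bus_number(),
                device.address(),
                desc.vendor_id(),
                desc.product_id()
            );
        }
        Ok(())
    }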
Of course any such changes both cost a lot and take years or decades to become ubiquitous. Windows in particular has an incredible backwards compatibility story, which in practice means backwards compatibility with every past bug they ever had. But this is a really valuable feature for many users who have old apps and, yes, drivers that rely on those bugs!
> It’s impossible to prove that an arbitrary program, which someone else gave you, is correct.
That is, of course, true. The chance that an arbitrary program is correct is very close to zero, and one can’t prove a false statement, so one should not even try: an arbitrary program someone gave you is almost certainly incorrect.
The standard methodology for formally correct software is joint development of a program and of a proof of its correctness. One starts from a specification, and refines it into a proof and a program in parallel.
One can’t write a program and then try to prove its correctness as an afterthought. The goal of having formally verified software needs to be present from the start, and then there are methods for creating this kind of software jointly with a proof of its correctness (though these methods are currently very labor-expensive).
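A deliberately tiny illustration of developing the program and the proof together, in Lean 4 (the function and the property are mine; real projects refine from a specification at vastly larger scale):

    -- The "program" is `double`; the theorem below is its one-line spec,
    -- and the two are written and checked together.
    def double (n : Nat) : Nat := n + n

    -- Spec: the output is always even. Here the proof is discharged by
    -- the `omega` linear-arithmetic decision procedure.
    theorem double_even (n : Nat) : double n % 2 = 0 := by
      unfold double
      omega

If `double` is later changed in a way that breaks the property, the proof stops compiling; that is the practical payoff of keeping program and proof joined.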
(And yes, perhaps, the Windows environment is too messy to deal with formally. Although one would think that fire control for fleet missile defense would be fairly messy as well, yet people claimed that they created verified Ada code for that back in the 1990s (or, perhaps, the late 1980s, I am not sure). The numbers they quoted during a mid-1990s talk were 500 thousand lines of Ada and 50 million dollars (it would be way more expensive today).)
It would specifically be impossible to prove the CrowdStrike driver safe because, by necessity, it regularly loads new data provided by CrowdStrike threat intelligence and changes its behavior based on those updates.
Even if you could get the CS driver to refuse to load new updates without proving certain attributes of those updates, you would also need some kind of assurance about the characteristics of every other necessary part of the Windows OS, in every future update.
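The first half of that doesn’t even require full proof: merely validating an update before acting on it already narrows the failure modes. A hedged Rust sketch, with an entirely invented update format (4-byte magic, 4-byte little-endian payload length, payload):

    fn parse_update(blob: &[u8]) -> Result<&[u8], &'static str> {
        const MAGIC: &[u8; 4] = b"UPD1"; // invented for illustration
        if blob.len() < 8 {
            return Err("truncated header");
        }
        if &blob[0..4] != MAGIC {
            return Err("bad magic");
        }
        let len = u32::from_le_bytes(blob[4..8].try_into().unwrap()) as usize;
        // Check the declared payload length against the actual buffer, so a
        // malformed update becomes a rejected file, not a wild read.
        blob.get(8..8 + len).ok_or("declared length exceeds file size")
    }

    fn main() {
        assert!(parse_update(b"UPD1\x02\x00\x00\x00ok").is_ok());
        assert!(parse_update(b"UPD1\xff\x00\x00\x00oops").is_err());
    }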
No, let’s keep in mind the Aegis fire control for missile defense example.
This is a highly variable situation: “enemy action” can come in many forms, from multiple directions at once; the weather can change rapidly; the fleet to defend might have a variety of compositions and spatial distributions; and so on. So one deals with a lot of variable and unpredictable factors. Yet they were able to formally establish some properties of that software, presumably to the satisfaction of their DoD customers.
It does not mean that they have a foolproof system, but the reliability is likely much better because of that effort at formal verification of the software.
With Windows, who knows; perhaps it is even more complex than that. But formal methods are often able to account for a wide range of external situations and data. For a number of reasons, they nevertheless don’t provide a full guarantee. There is a trap of thinking “formally verified ⇒ absolutely safe”, and it’s important not to fall into it: “formally verified” just means “much more reliable in practice”.
I was trying to address the general point of whether provably correct software is possible (obviously yes, since it is actually practiced occasionally for some mission-critical systems). I don’t know whether it makes sense in the context of Windows kernels. And from what people have recently been saying, Microsoft claims that European regulators forced it not to refuse CrowdStrike-like updates (so much for thinking about “what could or should be done in a sane world”).
Sure, that’s fair enough. I was thinking in the context of “formal verification that would have prevented this outage.”