Colloquially, Godel’s theorem says that for any formal system complex enough to implement a certain subset of mathematics there will be syntactically valid theorems that the system cannot evaluate. He shows this by demonstrating a particular technique that can look at any formal system and build a statement that breaks it. Hence, any proof checker will have statements it can’t handle.
The relevance to computer security comes from the realization that computer programs are formal systems, and large ones are often complex enough to exhibit this behavior whether they explicitly implement general-purpose mathematics or not. Programming languages, natural language processing systems, simulation programs and just about any kind of AI all tend to be powerful enough to be vulnerable, and of course lots of otherwise boring software has one of those things embedded in it. When such a system encounters a statement it can’t evaluate it might ignore it, or hang, or crash, or break the stack and execute your malware as root – there’s no way to tell without specifically looking at (and testing) each different class of problematic statement the system could encounter.
Now, some people concede that much, but insist that it doesn’t matter because Godel statements are just an oddity they can hard-code their systems to ignore. But Godel also showed that any attempt to patch a formal system to resist its Godel statement simply creates a more complex system of (original formal system + patch) that has its own Godel statement. Also, it’s important to realize that Godel was a mathematician, not a hacker, and he was only interested in showing that formal systems above a certain complexity bound can’t be both complete and consistent. When you start looking into the history of this little branch of mathematics you quickly discover that the more complex a system is the more ways there are to break it, and for something on the level we’re talking about the number of failure modes is so vast that an army of mathematicians (or hackers) could spend millennia tossing out examples without even scratching the surface.
So, if you could enumerate all the inputs that a particular program can’t safely evaluate you could make it just ignore them, at the cost of throwing out some statements that resemble problematic ones but would actually have been safe. But when you try to do this you discover that there is an endless array of different types of exploit, and whenever someone goes looking for new ones they find some. So if you want to expose something as complex as a theorem-prover to statements generated by hostile parties, you have to expect that even if you fix every vulnerability you know about they’ll find new ones on a regular basis.
Hence, any proof checker will have statements it can’t handle.
There are many things wrong with many of your statements here, but this one is particularly simple. You simply misunderstand what a proof checker does.
A proof checker does not deal with statements, it deals with formal proofs. Proof checking is roughly as complicated as parsing a context free language. That is, it is almost a trivial task.
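To make the contrast concrete, here is a toy checker for Hilbert-style propositional proofs (an illustrative sketch under made-up conventions, not any real prover’s format): every step must either be a listed axiom or follow from two earlier steps by modus ponens, so checking is a single mechanical pass over the proof.

```python
# Toy proof checker (hypothetical format): atoms are strings, and
# ("->", p, q) represents the implication p -> q. A proof is a list of
# steps; each step must be an axiom or follow by modus ponens from
# two earlier steps.

def follows_by_modus_ponens(formula, earlier):
    # formula B follows from some earlier A together with ("->", A, B).
    return any(
        imp == ("->", a, formula)
        for a in earlier
        for imp in earlier
    )

def check_proof(axioms, proof):
    earlier = []
    for step in proof:
        if step not in axioms and not follows_by_modus_ponens(step, earlier):
            return False  # step is unjustified; reject the whole proof
        earlier.append(step)
    return True

# From axioms P and P->Q, the proof [P, P->Q, Q] checks; Q alone does not.
axioms = {"P", ("->", "P", "Q")}
assert check_proof(axioms, ["P", ("->", "P", "Q"), "Q"])
assert not check_proof(axioms, ["Q"])
```

Note that the checker never has to decide whether a statement is provable; it only verifies that each given step is locally justified, which is why the task stays trivial no matter how deep the theorem is.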
Proof finding is another matter. A proof finder (aka a “theorem prover”) takes a statement as input and attempts to produce a proof as output. Godel shows that there are statements that a proof finder can’t handle.
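The difference shows up immediately if you sketch a naive finder (again a hypothetical propositional toy, nothing like a real prover): saturate the axioms under modus ponens and watch for the goal. In this finite setting the search always terminates, but in full first-order arithmetic the space of proofs is infinite, and Godel guarantees there are goals on which no search strategy can halt with an answer.

```python
# Naive proof finder via forward chaining (toy sketch): starting from the
# axioms, repeatedly apply modus ponens until the goal appears or no new
# statements can be derived. The round cap is only defensive; in this
# finite toy system saturation always happens on its own.

def find_proof(axioms, goal, max_rounds=100):
    derived = set(axioms)
    for _ in range(max_rounds):
        if goal in derived:
            return True
        new = {
            b
            for imp in derived if isinstance(imp, tuple) and imp[0] == "->"
            for (_, a, b) in [imp] if a in derived
        }
        if new <= derived:
            return False  # saturated without reaching the goal
        derived |= new
    return None  # gave up; in general no bound on search depth suffices

axioms = {"P", ("->", "P", "Q"), ("->", "Q", "R")}
assert find_proof(axioms, "R") is True   # found after two rounds of chaining
assert find_proof(axioms, "S") is False  # finite system, so search bottoms out
```

The `return None` branch is where the trouble lives: once the language is rich enough for Godel’s construction, there are goals for which the honest answer is that the search simply never comes back.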
You have repeatedly made claims which only make sense if you have the two kinds of programs confused. Fix that confusion, please.
He shows this by demonstrating a particular technique that can look at any formal system and build a statement that breaks it. Hence, any proof checker will have statements it can’t handle.
He shows that for any given proof system, there exist statements for which no proof exists and no disproof exists either. That is not the same as breaking the proof-checker, which only cares about the particular proof it’s given, not about whether other proofs or disproofs exist.
Proof generation is inherently complex. Proof verification is nearly trivial, and very well understood.
Colloquially, Godel’s theorem says that for any formal system complex enough to implement a certain subset of mathematics there will be syntactically valid theorems that the system cannot evaluate. He shows this by demonstrating a particular technique that can look at any formal system and build a statement that breaks it. Hence, any proof checker will have statements it can’t handle.
There’s a lot to object to there, but let’s be generous about “colloquially.” Here is a more precise and correct rewording of your last line: “Any algorithm that takes a statement as input and searches for a proof or disproof will fail on some inputs.” Here is a more precise and incorrect rewording: “There is no algorithm that takes a proof as input and evaluates the correctness or incorrectness of that proof.”
Now, some people concede that much, but insist that it doesn’t matter because Godel statements are just an oddity they can hard-code their systems to ignore.
I dare you to find someone actually claiming that (preferably someone making the claim under his real name, as a low barrier for eliminating clueless idiots on forums and comment threads.)
When such a system encounters a statement it can’t evaluate it might ignore it, or hang, or crash, or break the stack and execute your malware as root – there’s no way to tell without specifically looking at (and testing) each different class of problematic statement the system could encounter.
Actually, type theory is powerful enough to prove that programs will never do any of these things, for all statements at once. People rarely bother to prove that their programs are correct, but only because it’s not usually worth the effort.
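For concreteness, this is the flavor of guarantee meant, sketched in Lean 4 (`safeHead` is a made-up example for illustration, not a library function): the type checker only accepts definitions that are total and exhaustive, so acceptance itself certifies that evaluation can’t crash or hang on any input.

```lean
-- Lean 4 sketch (hypothetical example): definitions must be total and
-- exhaustive, so the type checker's acceptance is itself a proof that
-- evaluation cannot crash or fail to terminate on any input.
def safeHead : List Nat → Nat
  | []     => 0      -- the would-be "can't evaluate" case must be handled
  | x :: _ => x

-- And specific behavior can be proved once, for all inputs:
theorem safeHead_cons (x : Nat) (xs : List Nat) :
    safeHead (x :: xs) = x := rfl
```

The point is that the proof obligation is discharged at compile time, for every input at once, rather than by testing inputs one class at a time.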
I often see formal verification enthusiasts make such claims, but as far as I can tell that’s like saying AIXI solves the problem of AI design. If you could do this kind of thing on real programs under realistic conditions we’d all be using formal proof tools instead of testing departments. Instead large companies spend billions of dollars a year on testing we all know is inadequate, largely because efforts to actually apply formal techniques to these systems have failed.
These real-world programs that no one is willing to use formal proof tools on are 2-5 orders of magnitude more complicated than the hypothetical proof checker under consideration would be.
Futile brute-force testing is preferred because so many of the people involved are unwilling or unable to adequately formalize their actual requirements.
It is technically true that the people involved are unable to adequately formalize their actual requirements, but that is because precisely formalizing their requirements is a problem much harder than human mathematicians or computer scientists have ever solved. All humans would be “unwilling or unable to adequately formalize their actual requirements.” Notice, for example, that mathematicians can’t precisely formalize modern mathematics in the required sense either, and that problem is still much easier than formal verification for the sorts of things modern commercial projects are trying to accomplish.
This intuitively seems wrong, but without high confidence. Link to a proof, even a highly informal one, or an informal explanation from yourself?