I see. It wasn’t my intention to misquote you; I was simply not aware of a significant distinction.
By the way, does the checking of the validity of a proof of code correctness itself have to be validated or proven as well, or is there some agreed limit to the depth of verification that is sufficient to be sure that nothing unexpected will happen when you run the code?
The usual approach, taken in tools like Coq and HOL, is to reduce the proof checker to a small component of the overall proof assistant, in the hope that this small piece can be thoroughly understood by humans and proved correct without machine assistance. This keeps the human correctness-checking burden low, while also keeping the part that must be trusted in a domain that’s reasonable for humans to work with, as opposed to, say, writing meta-meta-meta proofs.
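The “small trusted kernel” idea can be sketched concretely. The toy below is purely illustrative (it is not the actual Coq or HOL kernel, and all names are invented): the only way to obtain a `Theorem` value is through the kernel’s rule functions, so trusting the whole system reduces to auditing this small piece of code.

```python
# LCF-style toy kernel: Theorem values can only be built by the rules below,
# so soundness rests on auditing this file alone.

_KERNEL = object()  # private capability; only kernel rules hold a reference

class Theorem:
    """A proved formula, constructible only via the kernel rules below."""
    def __init__(self, formula, _token=None):
        if _token is not _KERNEL:
            raise ValueError("Theorems can only be built by kernel rules")
        self.formula = formula

def postulate(formula):
    # Trusted axiom introduction (demo only; a real kernel has a fixed
    # axiom schema rather than an arbitrary escape hatch like this).
    return Theorem(formula, _token=_KERNEL)

def axiom_identity(p):
    # Rule: |- p -> p
    return Theorem(("->", p, p), _token=_KERNEL)

def modus_ponens(ab, a):
    # Rule: from |- a -> b and |- a, conclude |- b
    op, lhs, rhs = ab.formula
    assert op == "->" and lhs == a.formula, "modus ponens does not apply"
    return Theorem(rhs, _token=_KERNEL)
```

The point of the design is that everything outside the kernel (tactics, decision procedures, automation) can be arbitrarily buggy without compromising soundness: a bug there can make a proof attempt fail, but can never manufacture a bogus `Theorem`.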
The consequence of this approach is that, since the underlying logic is very simple, the resulting proof language is very low-level. Human users give higher-level specifications, and the bulk of the proof assistant compiles those into low-level proofs.
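One way to picture that compilation step, as a minimal sketch (the script format and rule names here are invented, not those of any real assistant): an untrusted “tactic” turns the goal `p -> p` into a script of primitive steps in a toy Hilbert system (axiom schemas K and S plus modus ponens), and a small trusted checker merely replays the script.

```python
# Two-layer sketch: untrusted tactic compiles a goal into a low-level
# script; a small trusted checker replays it step by step.

def imp(a, b):
    return ("->", a, b)

def axK(p, q):     # axiom schema K: p -> (q -> p)
    return imp(p, imp(q, p))

def axS(p, q, r):  # axiom schema S: (p->(q->r)) -> ((p->q) -> (p->r))
    return imp(imp(p, imp(q, r)), imp(imp(p, q), imp(p, r)))

def check(script):
    """Trusted checker: replay a script, returning the proved formulas."""
    proved = []
    for step in script:
        if step[0] == "K":
            proved.append(axK(step[1], step[2]))
        elif step[0] == "S":
            proved.append(axS(step[1], step[2], step[3]))
        elif step[0] == "MP":
            ab, a = proved[step[1]], proved[step[2]]
            assert ab[0] == "->" and ab[1] == a, "bad modus ponens"
            proved.append(ab[2])
        else:
            raise ValueError("unknown rule")
    return proved

def tactic_self_imp(p):
    """Untrusted tactic: compile the goal |- p -> p into primitive steps
    (the classic S K K derivation)."""
    return [
        ("S", p, imp(p, p), p),  # 0: S instance
        ("K", p, imp(p, p)),     # 1: K instance matching S's antecedent
        ("MP", 0, 1),            # 2: (p -> (p -> p)) -> (p -> p)
        ("K", p, p),             # 3: p -> (p -> p)
        ("MP", 2, 3),            # 4: p -> p
    ]
```

Note the asymmetry: the tactic needed to know the clever S-K-K derivation, while the checker only pattern-matches each step. That is exactly why the low-level proof language can stay simple enough to trust.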
There are a couple of branches worth mentioning in the “meta-proof”. One is the proof of correctness of the proof-checking software; I believe that fiddlemath is addressing this problem. The “specification” of the proof-checking software is essentially a presentation of an “inference system”. The second is the proof of the soundness of the inference system with respect to the programming language and the specification language. In principle, this is a task that is done once, when the languages are defined. The difficulty of this task is (AFAIK) roughly like that of a proof of cut-elimination: that is, O(n^2), where n is the number of productions in the language grammars.
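A toy analogue of that second branch may help. In a real system, soundness relates the inference rules to the semantics of the programming and specification languages; in the propositional toy below (invented for illustration), the “semantics” is just truth tables, and we check mechanically that each axiom schema is a tautology and that modus ponens preserves truth.

```python
# Soundness of a toy inference system against a semantics: every axiom
# instance must be valid, and each rule must preserve validity.

from itertools import product

def imp(a, b):
    return ("->", a, b)

def evaluate(f, env):
    """Truth-table semantics: variables look up env; '->' is implication."""
    if isinstance(f, str):
        return env[f]
    _, a, b = f
    return (not evaluate(a, env)) or evaluate(b, env)

def is_tautology(f, variables):
    return all(evaluate(f, dict(zip(variables, vals)))
               for vals in product([False, True], repeat=len(variables)))

# Axiom schema K: p -> (q -> p) holds in every valuation.
K = imp("p", imp("q", "p"))

# Axiom schema S: (p -> (q -> r)) -> ((p -> q) -> (p -> r)) likewise.
S = imp(imp("p", imp("q", "r")),
        imp(imp("p", "q"), imp("p", "r")))

def mp_preserves_truth(variables):
    """Check that whenever a -> b and a are true, b is true (here for the
    schematic case a = 'p', b = 'q')."""
    for vals in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, vals))
        if evaluate(imp("p", "q"), env) and evaluate("p", env):
            if not evaluate("q", env):
                return False
    return True
```

The real task is harder in kind, not just in size: instead of finite truth tables, soundness must be argued against the operational or denotational semantics of the languages involved, which is where the cut-elimination-like O(n^2) effort over the grammar productions comes in.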