The halting problem is overstated
Recently seen on an otherwise excellent blog post:
I’ve definitely been wrong before. For example, in my article Types + Properties = Software, I wrote about type systems:
“To the far right, we have a hypothetical language with such a strong type system that, indeed, if it compiles, it works.”
To the right, in this context, means more statically typed. While the notion is natural, the sentence is uninformed. When I wrote the article, I hadn’t yet read Charles Petzold’s excellent Annotated Turing. Although I had heard about the halting problem before reading the book, I hadn’t internalised it. I wasn’t able to draw inferences based on that labelled concept.
After I read the book, I’ve come to understand that a general-purpose static type system can never prove unequivocally that a generic program works. That’s what Church, Turing, and Gödel proved.
Now he is correct that:
A general-purpose static type system can never prove unequivocally that a generic program works.
In other words, there will always be programs that work but that you can’t prove work. A simple example would be:
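Something along these lines (a sketch only: RunSomeProgram and ArbitraryProgram are as described below, and a division by zero plays the part of the illegal operation):

    # Sketch: exec stands in for whatever interpreter RunSomeProgram is,
    # and the division by zero is the "illegal operation".
    def RunSomeProgram(source):
        exec(source)  # may or may not ever return

    ArbitraryProgram = "while True: pass"  # some arbitrary source text

    RunSomeProgram(ArbitraryProgram)  # if this never returns...
    print(1 / 0)                      # ...the illegal operation is never reached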
This is a working program if RunSomeProgram runs forever, so that the illegal operation is never reached; but the halting problem means that for any compiler there will always be values of ArbitraryProgram which run forever without the compiler being able to prove that this is the case.
But that’s not the original statement:
“We have a hypothetical language with such a strong type system that, indeed, if it compiles, it works.”
This is trivially false. Consider, for example, AddLang. AddLang takes a comma-separated list of integers as input, and prints the sum as the output. E.g. 1,2,3 will output 6. It’s trivial to write a compiler for that language such that if it compiles, it works.
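For concreteness, the whole language fits in a few lines of Python (run_addlang is a hypothetical name; rejecting malformed input plays the role of failing to compile):

    # A toy AddLang implementation: parse a comma-separated list of integers
    # and print their sum. If the input parses (i.e. "compiles"), running it works.
    def run_addlang(source: str) -> int:
        # int() raises ValueError on anything that isn't an integer,
        # which is this toy language's equivalent of a compile error.
        return sum(int(token) for token in source.split(","))

    print(run_addlang("1,2,3"))  # prints 6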
Of course that’s being a bit unfair. As is clear from the follow-up, he was talking about a general-purpose type system, i.e. a type system for a Turing-complete language, not a DSL.
For that though, we first need to define “works”. I think a reasonable definition of “works” is “works as the user intended it to”. Now a clever programmer could definitely write a program which works as the user intends it to if and only if the Goldbach conjecture is true, which would be an example of a program that might work but wouldn’t compile. However, that doesn’t limit us much: after all, we can just write it in a way that doesn’t require solving the Goldbach conjecture. We’re more interested in whether there are classes of programs we would want to write that such a type system just wouldn’t let us write.
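To make the Goldbach example concrete, here is a sketch (is_prime and goldbach_holds_for are hypothetical helpers) of a program that works, in the sense of never crashing, if and only if the Goldbach conjecture is true; no compiler could accept it as correct without proving the conjecture:

    # This program crashes only if it finds an even number >= 4 that is not
    # the sum of two primes, i.e. only if the Goldbach conjecture is false.
    def is_prime(n: int) -> bool:
        return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    def goldbach_holds_for(n: int) -> bool:
        return any(is_prime(p) and is_prime(n - p) for p in range(2, n))

    n = 4
    while True:
        if not goldbach_holds_for(n):
            print(1 / 0)  # the "doesn't work" branch, reached only by a counterexample
        n += 2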
The first thing to realize is that if a programmer writes a program, he thinks it works. In other words, he has an informal proof in his mind that the program displays some specific properties. Now there are two possibilities:
a) He was wrong, and the program doesn’t actually work.
b) He was right, and the compiler should be able to validate that proof.
So for any reasonable program a human user would actually write, a compiler could theoretically validate that it does what the user wants it to do.
So the halting problem is kind of a red herring when it comes to the limits of type systems. Yes, of course there are programs which work but which you can’t prove work. But exactly the same limitation applies to humans as well as to compilers, so the programs humans actually write tend to be ones that provably work.
So why don’t we have type systems where “if it compiles, it works”?
Firstly, there’s been a huge amount of research into such languages, for example F*, Coq, Agda and Idris. They are pretty close to this level, but they aren’t perfect yet. If I were to identify the main challenges, they would be:
1) Currently, in order to provide rigorous type checking, dependently typed languages require proving whether or not a given function runs forever. That’s often not the interesting question: a web server, for example, definitely doesn’t halt, but it displays lots of other behavior which we should be able to prove is correct (a sketch of this follows the list). We need to focus on proving the properties that the user is interested in, rather than the ones the compiler is interested in.
2) Aesthetically, how do you make it easy and pleasant for a user to hand the compiler an arbitrary proof that their program works, so that the compiler can validate it? Currently, dependently typed languages avoid this problem by only allowing specific kinds of proof (namely ones similar to those of a classic type system).
3) Writing formal proofs is hard. Writing a maths paper is hard enough for anyone without a lot of experience of it, and even then a paper is nowhere near a formal proof. A computer would have a very hard time validating your average maths paper, and indeed a number of papers have been found to be incorrect when converted into formal, computer-validatable proofs. Making it easy for ordinary programmers to write arbitrary, formal, machine-verifiable proofs would be an extremely difficult challenge.
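As a sketch of the first challenge (all names here are hypothetical, not tied to any particular dependently typed language): the server loop below never terminates by design, while the property we actually care about lives entirely in the handler, a small total function a checker could reasonably verify.

    # The interesting property: handle_request is a total function from
    # requests to responses, and that is what we'd like to prove things about.
    def handle_request(request: str) -> str:
        return "HTTP/1.1 200 OK\r\n\r\nhello, " + request

    # Deliberately non-terminating; whether this halts tells us nothing
    # the user cares about.
    def serve_forever(next_request, send):
        while True:
            send(handle_request(next_request()))

Here the halting behavior of serve_forever is what a totality checker worries about, while the correctness of handle_request is what the user actually cares about.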