Suggestion: if your goal is AI, maybe abandon logic and set theory in favor of type theory. It seems better suited to computers anyway thanks to computational trinitarianism. The problem of pinning down models uniquely seems to be solved by something called the univalence axiom, which I currently don’t understand at all.
The univalence axiom has nothing to do with pinning down models (except perhaps that, if you are talking about models internally, it says you’re pinning them down up to isomorphism rather than up to some notion of syntactic equality). Non-standard models are essential for getting an intuition about what is and isn’t provable; most of the arguments I’ve seen for unexpectedly interesting terms involve picking a non-standard model, finding a point in that model with unexpected properties (e.g., something that is false in the standard model), and then constructing that point. See, for example, this post, which constructs a non-standard point on the circle, one for which you can’t exhibit a path from it to the basepoint of the circle. More generally, see The Elements of an Inductive Type for a good explanation of why non-standard models are inescapable.
I guess that understanding the univalence axiom would be helped by understanding the implicit equality axioms.
The univalence axiom states that two isomorphic types are equal; this means that if type A is isomorphic to type B, and type C contains A, then C has to contain B (and a few similar requirements).
Requiring that two types be equal whenever they are isomorphic amounts to prohibiting anything we could write that distinguishes them (i.e., anything that fails to handle them equivalently).
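A loose illustration in Haskell (not univalent type theory; the `Bit` type and helper names are invented for this sketch): given an isomorphism, every operation on one type can be transported to the other, so nothing generic we write can treat the two types differently.

```haskell
-- Sketch only: Haskell has no univalence, but an isomorphism between
-- Bit and Bool lets us transport any Bool operation to Bit, so generic
-- code has no way to tell the two types apart.
data Bit = O | I deriving (Eq, Show)

toBool :: Bit -> Bool
toBool O = False
toBool I = True

fromBool :: Bool -> Bit
fromBool False = O
fromBool True  = I

-- Transport an operation across the isomorphism.
liftOp :: (Bool -> Bool) -> (Bit -> Bit)
liftOp f = fromBool . f . toBool

main :: IO ()
main = do
  print (map (toBool . fromBool) [False, True])  -- the round trips are identities
  print (map (fromBool . toBool) [O, I])
  print (liftOp not I)                           -- negation, transported to Bit
```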
Just so you know… I want to upvote you for linking to that awesome type theory article (although JeremyHahn posted it to this discussion already), but I also want to vote you down for computational trinitarianism (propositions are not types! arggggg!). ;)
I’m generally not on board with the whole “programs are proofs” dogma. When I look into the details, all it says is that a successful type-check of a program gives a proof of the existence of a program of that type. In expressively powerful languages, this seems very uninteresting; so far as I know, you can easily construct a program of any type by just discarding the input and then creating an output of the desired type. (Handling the empty type is a little tricky, but I wonder if infinite loops or program failures would do.) So my understanding is that you only get the interesting type structure (where it isn’t trivial to prove the existence of a type) by restricting the language. (The simply typed lambda calculus, for instance, is not Turing-complete.)
So, while I see the Curry–Howard stuff as beautiful math, I don’t think it justifies the “programs are proofs” mantra that many have taken up! I don’t see the appeal.
I think Harper (who coined the phrase “computational trinitarianism”) doesn’t claim that “programs are proofs”, but rather that proofs are programs. That is, proofs should be written down in typed calculi and (to be intuitionistically acceptable) should have computational content. But programming languages and type systems can be used for more things than just proving theorems.
When I look into the details, all it says is that a successful type-check of a program gives a proof of the existence of a program of that type.
Of course this isn’t quite true. Other interesting things are said, too: a proof of a conjunction can be handled as a pair of proofs (one for each conjunct), a proof of a disjunction can be a proof of either side, and so on; so in a sufficiently rich type system, we can leverage the type machinery to do logic, using product types as conjunctions, sum types as disjunctions, and so on. A proposition is then modeled as the type of its proofs. An implication is a function type, proven by a procedure that transforms a proof of one thing into a proof of the other.
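This dictionary can be sketched in a few lines of Haskell (the function names here are mine; each type is the proposition and each definition is its proof):

```haskell
-- Conjunction: a proof of (A and B) is a pair of proofs.
andElim1 :: (a, b) -> a
andElim1 (x, _) = x

-- Disjunction: a proof of (A or B) is a proof of one side, tagged.
orIntroL :: a -> Either a b
orIntroL = Left

-- Implication: a proof of (A implies B) is a function; modus ponens
-- is just function application.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

main :: IO ()
main = do
  print (andElim1 (1 :: Int, "proof of B"))
  print (modusPonens (+ 1) (41 :: Int))
```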
Still, I can’t swallow “programs are proofs / propositions are types”.
you can easily construct a program of any type by just discarding the input and then creating an output of the desired type.
Actually, by discarding the input and then falling into an infinite loop (you cannot construct the output of the desired type without the input).
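In Haskell the trick is two lines (`bogus` and `six` are names made up for this sketch): nontermination inhabits every type, which is exactly why a Turing-complete language, read as a logic, proves everything.

```haskell
-- Typechecks at every pair of types: discard the input, loop forever.
-- As logic, this "proves" a -> b for arbitrary a and b.
bogus :: a -> b
bogus x = bogus x

-- By contrast, a terminating inhabitant of a -> Int is honest:
six :: a -> Int
six _ = 6

main :: IO ()
main = print (six "anything")  -- fine to run; never call bogus
```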
I don’t know what you mean. Suppose you want a function A → Int, where A is some type. \x:A. 6, the function which takes x (of type A) and outputs 6, seems to do fine. To put it in C-like form: int six(A x) { return 6; }.
If programs are proofs, then general programming languages correspond to trivialism.
Am I missing something?
The type constructors you’re thinking of are Arrow and Int. Forall is another type constructor, for constructing generic polymorphic types. Some types, such as “Forall A, Forall B, A → B”, are uninhabited: you cannot produce an output of type B in a generic way, even if you are given access to an element of type A.
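A small Haskell contrast (using `Data.Void` from base; the helper names are illustrative): in the total fragment there is no term of type `forall a b. a -> b`, while the empty type implies everything and the tautology `a -> a` has the identity function as its canonical proof.

```haskell
import Data.Void (Void, absurd)

-- Ex falso quodlibet: from the (empty) type Void, anything follows.
exFalso :: Void -> b
exFalso = absurd

-- The tautology "A implies A" is proven by the identity function.
taut :: a -> a
taut x = x

-- No total definition can fill this in generically:
-- impossible :: a -> b
-- impossible x = ???

main :: IO ()
main = print (taut (42 :: Int))
```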
The type corresponding to a proposition like “all computable functions from the reals to the reals are continuous” looks like a function type consuming some representation of “a computable function” and producing some representation of “that function is continuous”. To represent that, you probably need dependent types—this would be a type constructor that takes an element, not a type, as a parameter. Because not all functions are continuous, the codomain representing “that function is continuous” isn’t in general inhabited. So building an element of that codomain is not necessarily trivial—and the process of doing so amounts to proving the original proposition.
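Haskell can only approximate dependent types, but the DataKinds/GADTs extensions give the flavor of a type indexed by a value (all names below are invented for the sketch): `Even n` is a proposition about the type-level number `n`, and a value of that type is a proof of it.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
import Data.Kind (Type)

data Nat = Z | S Nat

-- 'Even n' is the type of proofs that n is even; 'Even (S Z)' is empty.
data Even :: Nat -> Type where
  EvenZ  :: Even 'Z
  EvenSS :: Even n -> Even ('S ('S n))

-- A proof that four is even: constructed, not merely asserted.
fourIsEven :: Even ('S ('S ('S ('S 'Z))))
fourIsEven = EvenSS (EvenSS EvenZ)

-- Reading the number back off the proof.
evenToInt :: Even n -> Int
evenToInt EvenZ      = 0
evenToInt (EvenSS e) = 2 + evenToInt e

main :: IO ()
main = print (evenToInt fourIsEven)  -- 4
```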
What I’m trying to say is that the types that type theorists use look a lot more like propositions than the types that mundane programmers use.
You cannot produce an output of type B in a generic way, even if you are given access to an element of type A.
...except in Haskell, where we can make functions for generating instances given a desired type (such as Magic Haskeller). I still feel it’s accurate to say that general-purpose languages correspond to trivialism.
Thanks, though! I didn’t think of the generic version. So, in general, inhabited types all act like “true” in the isomorphism. (We have A->B wherever B is a specific inhabited type.)
What I’m trying to say is that the types that type theorists use look a lot more like propositions than the types that mundane programmers use.
I’m fully aware of that, and I enjoy the correspondence. What bothers me is that people seem to go from a specific isomorphism (which has been greatly extended, granted) to a somewhat ill-defined general principle.
My frustration comes largely from hearing mistakes. One extreme example is thinking that (untyped!) lambda calculus is the universal logic (capturing first-order logic and any higher-order logic we might invent; after all, it’s Turing-complete). Lesser cases are usually just ill-defined invocations (i.e., guessing that the isomorphism might be relevant in cases where it’s not obvious how it would be).
Magic Haskeller and Augustsson’s Djinn are provers (or to say it another way, comprehensible as provers, or to say it another way, isomorphic to provers). They attempt to prove the proposition, and if they succeed they output the term corresponding (via the Curry-Howard Isomorphism) to the proof.
I believe they cannot output a term t :: a -> b because there is no such term: “anything implies anything else” is false.
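For a tautologous type, by contrast, there is a term to find. Given the type below, a prover in this family can emit essentially one program: function composition (this is a sketch of the kind of output, not a literal Djinn transcript).

```haskell
-- "If A implies B and B implies C, then A implies C": the proof term
-- is composition, the only total program of this type up to equivalence.
syllogism :: (a -> b) -> (b -> c) -> (a -> c)
syllogism f g = g . f

main :: IO ()
main = print (syllogism (+ 1) (* 2) (5 :: Int))  -- (5 + 1) * 2 = 12
```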
The type on the right-hand side is usually something much more complicated, and “creating an output of the desired type” is not trivial.
What is “trivialism”?
https://en.wikipedia.org/wiki/Trivialism I guess.
If programs are proofs, then general programming languages correspond to trivialism.
It is not that difficult to write general-purpose functions in total languages like Coq: you just provide the functions with “fuel” to “burn” during computation. You don’t have to express all the features of interest in the type, but then you cannot rely on automation.
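The fuel idiom can be sketched in Haskell (the Collatz example and all names are mine): the recursion is structural on the fuel argument, which is the shape a totality checker like Coq’s accepts, and exhausted fuel yields `Nothing` instead of divergence.

```haskell
-- Count Collatz steps to reach 1, spending one unit of fuel per step.
-- The recursion is structural on 'fuel', so the function is total.
collatzSteps :: Int -> Integer -> Maybe Int
collatzSteps fuel n
  | fuel <= 0 = Nothing                               -- out of fuel: give up
  | n == 1    = Just 0
  | even n    = succ <$> collatzSteps (fuel - 1) (n `div` 2)
  | otherwise = succ <$> collatzSteps (fuel - 1) (3 * n + 1)

main :: IO ()
main = do
  print (collatzSteps 100 6)  -- Just 8
  print (collatzSteps 3 6)    -- Nothing: three steps are not enough
```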
The “programs are proofs” slogan is interesting from yet another point of view: what are the theorems for which a complex program is the simplest proof? There can be multiple such “theorems” (the “principal types” property doesn’t hold in more complex settings).
Well yes, either direct input or wider context (i.e., environment, libraries). You can only construct values of tautology types with expressions that don’t refer to the context.