I don’t know what you mean. Suppose you want a function A → Int, where A is some type. \x:A. 6, the function which takes x (of type A) and outputs 6, seems to do fine. To put it in C-like form: int six(void *x) { return 6; }
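For instance, a quick Haskell sketch of the same constant function (the name six is mine):

```haskell
-- A polymorphic constant function: for any input type a, return 6.
-- It inhabits the type a -> Int for every choice of a.
six :: a -> Int
six _ = 6
```

So six True, six "anything", and six (3.14 :: Double) all evaluate to 6.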
If programs are proofs, then general programming languages correspond to trivialism.
Am I missing something?
The type constructors that you’re thinking of are Arrow and Int. Forall is another type constructor, for constructing generic polymorphic types. Some types such as “Forall A, Forall B, A → B” are uninhabited. You cannot produce an output of type B in a generic way, even if you are given access to an element of type A.
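A Haskell sketch of the contrast (the names are mine): the fully generic arrow type is inhabited only when the output can actually be built from the input, as in the identity function.

```haskell
-- Inhabited: "Forall A, A -> A" has a total inhabitant.
identity :: a -> a
identity x = x

-- Uninhabited: "Forall A, Forall B, A -> B" has no total inhabitant.
-- Any attempted definition either fails to typecheck or must diverge:
--   impossible :: a -> b
--   impossible x = ???   -- nothing generic can produce a b
```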
The type corresponding to a proposition like “all computable functions from the reals to the reals are continuous” looks like a function type consuming some representation of “a computable function” and producing some representation of “that function is continuous”. To represent that, you probably need dependent types—this would be a type constructor that takes an element, not a type, as a parameter. Because not all functions are continuous, the codomain representing “that function is continuous” isn’t in general inhabited. So building an element of that codomain is not necessarily trivial—and the process of doing so amounts to proving the original proposition.
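As a toy illustration in Lean 4 (my example, far simpler than the continuity statement): the codomain of the function below mentions the argument n itself, so this is a dependent function type, and inhabiting it amounts to proving a theorem about every n.

```lean
-- For every natural number n, a proof of the proposition n ≤ n + 1.
-- The result *type* depends on the argument *value* n: a dependent
-- (Pi) type.  Writing the function is proving the theorem.
def succLe : (n : Nat) → n ≤ n + 1 :=
  fun n => Nat.le_succ n
```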
What I’m trying to say is that the types that type theorists use look a lot more like propositions than the types that mundane programmers use.
You cannot produce an output of type B in a generic way, even if you are given access to an element of type A.
...except in Haskell, where we can write programs that generate terms of a desired type (such as Magic Haskeller). I still feel like it’s accurate to say that general-purpose languages correspond to trivialism.
Thanks, though! I didn’t think of the generic version. So, in general, inhabited types all act like “true” in the isomorphism. (We have A -> B whenever B is a specific inhabited type.)
What I’m trying to say is that the types that type theorists use look a lot more like propositions than the types that mundane programmers use.
I’m fully aware of that, and I enjoy the correspondence. What bothers me is that people seem to go from a specific isomorphism (which has been extended greatly, granted) to a somewhat ill-defined general principle.
My frustration comes largely from hearing mistakes. One extreme example is thinking that (untyped!) lambda calculus is the universal logic (capturing first-order logic and any higher-order logic we might invent; after all, it’s Turing-complete). Lesser cases are usually just ill-defined invocations (i.e., guessing that the isomorphism might be relevant in cases where it’s not obvious how it would be).
Magic Haskeller and Augustsson’s Djinn are provers (or, to put it another way, are comprehensible as provers; or again, are isomorphic to provers). They attempt to prove the proposition, and if they succeed they output the term corresponding (via the Curry-Howard Isomorphism) to the proof.
I believe they cannot output a term t :: a -> b, because there is no such term: “anything implies anything else” is false.
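A sketch of the contrast (the definitions are the traditional K and S combinators): types that read as intuitionistic tautologies do have closed terms, and a prover like Djinn can synthesize them, while a -> b has none, so the search fails.

```haskell
-- "A implies (B implies A)" -- a tautology, so it has an inhabitant.
k :: a -> b -> a
k x _ = x

-- "(A implies (B implies C)) implies ((A implies B) implies (A implies C))"
s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

-- But a -> b ("anything implies anything else") has no total term.
```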
If programs are proofs, then general programming languages correspond to trivialism.
It is not that difficult to write general-purpose functions in total languages like Coq. You just provide the functions with “fuel” to “burn” during computation. You don’t have to express all features of interest in the type. But then you cannot rely on automation.
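A sketch of the fuel idiom in Haskell (the collatz example is mine; in Coq the fuel would be a nat recursed on structurally, which is what makes the definition pass the termination checker):

```haskell
-- General recursion made total: every recursive call burns one unit
-- of fuel, so the recursion is structural in the fuel argument.
collatz :: Int -> Integer -> Maybe Integer
collatz 0 _ = Nothing                          -- out of fuel: give up
collatz _ 1 = Just 1                           -- reached 1
collatz fuel n
  | even n    = collatz (fuel - 1) (n `div` 2)
  | otherwise = collatz (fuel - 1) (3 * n + 1)
```

The price is exactly the one described above: the type Maybe Integer no longer promises an answer, only an answer-or-fuel-exhaustion.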
The “programs are proofs” slogan is interesting from yet another point of view. What are the theorems for which a complex program is the simplest proof? There can be multiple such “theorems” (the “principal types” property doesn’t hold in complex settings).
Well, yes: either direct input or the wider context (i.e., the environment or libraries). You can only construct values of tautology types with expressions that don’t refer to the context.
Actually, you can: by discarding the input and then falling into an infinite loop (you still cannot genuinely construct an output of the desired type without the input).
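A sketch of that trick in Haskell (the name loopy is mine): the term typechecks at the fully generic arrow type, but only by diverging.

```haskell
-- In a partial (Turing-complete) language, every type has a "bogus"
-- inhabitant: discard the argument and loop forever.  The definition
-- typechecks at a -> b, but it never returns a value of type b.
-- This is why Curry-Howard yields a consistent logic only for total
-- languages.
loopy :: a -> b
loopy x = loopy x
```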
The type on the right-hand side is usually something much more complicated, and “creating an output of the desired type” is not trivial.
What is “trivialism”?
https://en.wikipedia.org/wiki/Trivialism I guess.