And, since traditional programming lets you write functions, it sounds like functional programming is just a restriction on what you can do. In which case, I can write functional code in Java.
Java doesn’t have tail call optimization, lazy evaluation, type inference, or the right kind of parametric polymorphism. Also Java compilers and runtimes cannot rely on the absence of side effects. All this stuff is important. If you try to translate an idiomatic Haskell program to Java, you’ll be in a world of pain. See Joe Duffy’s STM retrospective for a very detailed account of how a team of brilliant people at Microsoft tried and failed to translate a popular Haskell library into C#.
This is the stuff that confuses me. What is functional programming? Is it defined as things that you can do when you program with functions, or is it a set of properties that go together for historic reasons? Why does programming using functions correlate with all these things?
I understand that lazy evaluation works best with functional programming. But I get type inference in Perl, and parametric polymorphism in Java, so they can’t rely on functional programming.
Perl doesn’t have type inference. It has dynamic typing, but that doesn’t help with the typical use case of type inference: you write terse code without types, and the typechecker tells you what’s wrong with it without having to run it. Functional programmers often expect their programs to work correctly on the first run. Here are some slides showing how the typechecker can detect an infinite loop bug. They’re taken from a talk by Mark-Jason Dominus, a well-known Perl programmer.
Java’s implementation of parametric polymorphism is broken. Neal Gafter’s proposal for reified generics describes the problems nicely:
For a type parameter T, you can’t write a class literal T.class. You can’t use instanceof to test if an object is of type T. And you can’t create an array of T. But it gets worse. You also can’t write class literals for generic types like List&lt;String&gt;.class, or test if an object is an instanceof List&lt;String&gt;, or create an array of List&lt;String&gt;.
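To make Gafter’s points concrete, here is a small Java sketch (my own illustration, not from his proposal) showing what erasure allows and what it rejects. The commented-out lines are exactly the constructs the quote describes, and they fail at compile time:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    // With erasure, T does not exist at runtime, so none of the
    // commented lines below compile:
    static <T> void limits(Object obj) {
        // Class<T> c = T.class;         // error: cannot take T.class
        // boolean b = obj instanceof T; // error: illegal generic type for instanceof
        // T[] arr = new T[10];          // error: generic array creation
    }

    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        // The type argument is erased: both lists have the same runtime class.
        System.out.println(strings.getClass() == ints.getClass()); // prints true
        // instanceof only works against the raw type List:
        System.out.println(strings instanceof List); // prints true
        // boolean bad = strings instanceof List<String>; // error: illegal generic type
    }
}
```

So at runtime the JVM sees only `List`, never `List<String>`, which is the root of everything the quoted passage complains about.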
Asking for a definition of functional programming is asking for a flame war over words :-) Here’s my best attempt at an explanation: when people tried to program with pure functions, they discovered a whole world of new ideas that fit together in wonderful ways. It may not be obvious when you’re looking from the outside in, thinking that “it’s just programming with functions”. For example, Chris Okasaki’s red-black trees in Haskell take only 5 short lines of code to implement the rebalancing operation, because the supporting features of Haskell turned out to be a very good fit for Okasaki’s research on data structures.
Perl doesn’t have type inference. It has dynamic typing, but that doesn’t help with the typical use case of type inference: you write terse code without types, and the typechecker tells you what’s wrong with it without having to run it.
If I write,
$x = "3"; $y = $x + 2;
then Perl infers that $x is an integer. Why is that not type inference?
“Type inference” is a technical term. You can’t figure out its meaning by separately examining the words “type” and “inference” and then pasting those meanings together into something that vaguely makes sense. That would be like thinking that “real numbers” are the sort of numbers that are more real and touchy-feely than other numbers, and then arguing that −1 is obviously not “real”. In short, you have to read up. A good starting point is “Hindley-Milner type inference” on Wikipedia.
I already read that article before making my comment. It says, “Type inference refers to the ability to deduce automatically the type of a value in a programming language.” And Perl does that. Perl therefore does type inference.
Perl doesn’t use the Hindley-Milner algorithm, which that article says was “the algorithm first used to perform type inference”. Not “the only way to do type inference.”
More importantly, my question about type inference is a child of my question about why this is called a property of functional programming languages. A language doesn’t have to be functional to do type inference, and a functional language doesn’t have to do type inference. LISP doesn’t do type inference. So why do people bring up type inference when defining functional programming?
People don’t say Perl does type inference because Perl doesn’t do type inference in the technical sense of the term. People have other technical terms for what Perl is doing, e.g. “implicit type conversion”. In your example, Perl does not infer that “3” belongs to the type of integers, because that is false. First off, Perl doesn’t subdivide scalars into types like integers and strings. Secondly, if it did have separate types for integers and strings, then “3” would belong to the type of strings, which would have a different memory representation than integers. What Perl does is insert a silent conversion operation at runtime, which can sometimes give weird results, e.g. in your example it would happily treat “a” as the integer zero. (Type inference, my ass!) Similarly, there could be a conversion operation from strings like “$150” to values of type Money, but that doesn’t mean you should be talking about “type inference” to say such strings “are” values of type Money.
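For contrast, here is a small Java sketch (my own, for illustration) of how a statically typed language handles the same example. The implicit version is rejected at compile time, and the programmer has to spell out the conversion; the “a” case becomes a runtime exception instead of a silent zero:

```java
public class ConversionDemo {
    public static void main(String[] args) {
        String x = "3";
        // int y = x + 2;  // does not compile: x + 2 is String concatenation
        //                 // (yielding "32"), and a String cannot be an int
        int y = Integer.parseInt(x) + 2; // the conversion is written out explicitly
        System.out.println(y); // prints 5

        // The equivalent of Perl quietly treating "a" as zero is an
        // exception here, not a silent result:
        try {
            int z = Integer.parseInt("a") + 2;
        } catch (NumberFormatException e) {
            System.out.println("not a number"); // prints not a number
        }
    }
}
```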
People bring up type inference when discussing functional programming because Haskell and ML use variants of the Hindley-Milner algorithm. I don’t want to talk about “defining” functional programming, or discuss whether some language should or shouldn’t be labeled “functional”, because such questions are ultimately about words and therefore useless.
Perl doesn’t use the Hindley-Milner algorithm, which that article says was “the algorithm first used to perform type inference”. Not “the only way to do type inference.”
Indeed, but what Perl does still isn’t type inference; type inference is something done at compile time, and doesn’t really make sense outside of a static type system. The first-sentence summary of a Wikipedia article should not be taken as a definition.
But then what /should/ be taken as a definition?
I’m saying, “Saying that ‘type inference’ means ‘type inference’ makes sense; and Wikipedia agrees.” You say no. Citation needed.
Here’s an example of what I call type inference in Perl. It happens at execution, not at compile time. It’s also an example of why uncontrolled inference of types is bad:
sub complement {
my ($string) = @_;
$string =~ tr/acgt/tgca/;
return $string;
}
$fasta = "aaaccc";
$fasta = &complement(reverse($fasta));
print "$fasta\n";

What do you think this code will produce?
EDIT: Your comment has been substantially edited, so it looks like I’ll have to do the same for mine.
“tttggg\n”, since Perl “reverse” is for reversing lists, right? But it’s not clear what that has to do with type inference. You seem to once again be using “type inference” to mean “implicit type conversion”. There’s no need to call this “type inference” because it already had a name. If Perl used type inference—and did not implicitly convert between scalars and lists of one scalar—the program would never even start, because it would contain a type error. Again, the notion of “type inference” only applies in a static typing system, so it’s not really applicable to Perl.
If you want an actual definition, then here is my attempt: “Type inference” refers to compile-time inference of types so that type errors can be detected.
Since you work in AI, you have probably heard of the unification algorithm (of Prolog fame). Well, when a “languages” geek says “type inference” they mean something whose implementation involves the unification algorithm (among other things). (Note that most programmers using a compiler that does type inference probably do not know that it involves the unification algorithm. In other words, type inference is usually explained without making reference to unification.)
The same “languages” geek would refer to what is happening in your example as “coercion”. In particular, a string is being coerced to an integer.
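The unification step mentioned above can be sketched in a few dozen lines. This is my own toy illustration (in Java, assuming records, so Java 16+): type terms are either variables or constructors like arrow(int, a) for “int → a”, and unify extends a substitution until the two terms match or a type error is found. A real Hindley-Milner checker adds generalization and instantiation on top of this:

```java
import java.util.*;

public class UnifyDemo {
    // A type term: either a variable ("a", "b", ...) or a constructor
    // applied to argument terms, e.g. arrow(int, a) for "int -> a".
    record Term(String name, List<Term> args, boolean isVar) {
        static Term var(String n) { return new Term(n, List.of(), true); }
        static Term con(String n, Term... as) { return new Term(n, List.of(as), false); }
    }

    // Follow substitution chains so we always look at the representative term.
    static Term resolve(Term t, Map<String, Term> subst) {
        while (t.isVar() && subst.containsKey(t.name())) t = subst.get(t.name());
        return t;
    }

    // The occurs check: does variable v appear inside term t? Binding v to a
    // term containing v would create an infinite type.
    static boolean occurs(String v, Term t, Map<String, Term> subst) {
        t = resolve(t, subst);
        if (t.isVar()) return t.name().equals(v);
        return t.args().stream().anyMatch(a -> occurs(v, a, subst));
    }

    // Returns true and extends subst if s and t unify; false means a type error.
    static boolean unify(Term s, Term t, Map<String, Term> subst) {
        s = resolve(s, subst);
        t = resolve(t, subst);
        if (s.isVar() && t.isVar() && s.name().equals(t.name())) return true;
        if (s.isVar()) {
            if (occurs(s.name(), t, subst)) return false;
            subst.put(s.name(), t);
            return true;
        }
        if (t.isVar()) return unify(t, s, subst);
        if (!s.name().equals(t.name()) || s.args().size() != t.args().size()) return false;
        for (int i = 0; i < s.args().size(); i++)
            if (!unify(s.args().get(i), t.args().get(i), subst)) return false;
        return true;
    }

    public static void main(String[] args) {
        Map<String, Term> subst = new HashMap<>();
        // Unify "a -> int" with "bool -> b": infers a = bool, b = int.
        Term lhs = Term.con("arrow", Term.var("a"), Term.con("int"));
        Term rhs = Term.con("arrow", Term.con("bool"), Term.var("b"));
        System.out.println(unify(lhs, rhs, subst));                // prints true
        System.out.println(resolve(Term.var("a"), subst).name());  // prints bool
        // Unifying int with bool fails: that is a type error.
        System.out.println(unify(Term.con("int"), Term.con("bool"), subst)); // prints false
    }
}
```

Note how the failure case is exactly where a typechecker would report an error, and the occurs check is where it would report an infinite type.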