Perl doesn’t have type inference. It has dynamic typing, but that doesn’t help with the typical use case of type inference: you write terse code without types, and the typechecker tells you what’s wrong with it without having to run it.
If I write,
$x = "3"; $y = $x + 2;
then Perl infers that $x is an integer. Why is that not type inference?
“Type inference” is a technical term. You can’t figure out its meaning by separately examining the words “type” and “inference” and then pasting those meanings together into something that vaguely makes sense. That would be like thinking that “real numbers” are the sort of numbers that are more real and touchy-feely than other numbers, and then arguing that −1 is obviously not “real”. In short, you have to read up. A good starting point is “Hindley-Milner type inference” on Wikipedia.
I already read that article before making my comment. It says, “Type inference refers to the ability to deduce automatically the type of a value in a programming language.” And Perl does that. Perl therefore does type inference.
Perl doesn’t use the Hindley-Milner algorithm, which that article says was “the algorithm first used to perform type inference”. Not “the only way to do type inference.”
More importantly, my question about type inference is a child of my question about why this is called a property of functional programming languages. A language doesn’t have to be functional to do type inference, and a functional language doesn’t have to do type inference. LISP doesn’t do type inference. So why do people bring up type inference when defining functional programming?
People don’t say Perl does type inference because Perl doesn’t do type inference in the technical sense of the term. People have other technical terms for what Perl is doing, e.g. “implicit type conversion”. In your example, Perl does not infer that “3” belongs to the type of integers, because that is false. First off, Perl doesn’t subdivide scalars into types like integers and strings. Secondly, if it did have separate types for integers and strings, then “3” would belong to the type of strings, which would have a different memory representation than integers. What Perl does is insert a silent conversion operation at runtime, which can sometimes give weird results, e.g. in your example it would happily treat “a” as the integer zero. (Type inference, my ass!) Similarly, there could be a conversion operation from strings like “$150” to values of type Money, but that doesn’t mean you should be talking about “type inference” to say such strings “are” values of type Money.
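You can actually watch this with the core module Devel::Peek, which dumps a scalar’s internals. A minimal sketch; the exact FLAGS output varies across Perl versions, but the point is that one and the same scalar ends up carrying both a string slot and a cached integer slot:

use Devel::Peek;

my $x = "3";
Dump($x);          # only a string slot: FLAGS show POK, PV = "3"
my $y = $x + 2;    # the addition numifies $x and caches the result on it
Dump($x);          # same scalar, now also flagged IOK with IV = 3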
People bring up type inference when discussing functional programming because Haskell and ML use variants of the Hindley-Milner algorithm. I don’t want to talk about “defining” functional programming, or discuss whether some language should or shouldn’t be labeled “functional”, because such questions are ultimately about words and therefore useless.
Indeed, but what Perl does still isn’t type inference; type inference is something done at compile time, and doesn’t really make sense outside of a static type system. The first-sentence summary of a Wikipedia article should not be taken as a definition.
But then what /should/ be taken as a definition? I’m saying that reading “type inference” as “inferring types” makes sense, and Wikipedia agrees. You say no. Citation needed.
Here’s an example of what I call type inference in Perl. It happens at run time, not at compile time. It’s also an example of why uncontrolled inference of types is bad:
sub complement {
    my ($string) = @_;
    $string =~ tr/acgt/tgca/;
    return $string;
}

$fasta = "aaaccc";
$fasta = &complement(reverse($fasta));
print "$fasta\n";
What do you think this code will produce?
EDIT: Your comment has been substantially edited, so it looks like I’ll have to do the same for mine.
“tttggg\n”, since Perl “reverse” is for reversing lists, right? But it’s not clear what that has to do with type inference. You seem to once again be using “type inference” to mean “implicit type conversion”. There’s no need to call this “type inference” because it already had a name. If Perl used type inference—and did not implicitly convert between scalars and lists of one scalar—the program would never even start, because it would contain a type error. Again, the notion of “type inference” only applies in a static typing system, so it’s not really applicable to Perl.
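Here is the scalar/list business in isolation, since it is the crux of why your program prints what it does (the comments show the output I expect from a recent Perl):

my $fasta = "aaaccc";
my @copy = reverse($fasta);           # list context: reversing a one-element
print "@copy\n";                      #   list is a no-op, so "aaaccc"
print scalar reverse($fasta), "\n";   # scalar context reverses the string: "cccaaa"

So in your program, reverse gets list context, hands the string through untouched, and tr/// then complements it, which is how you end up with “tttggg”.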
If you want an actual definition, then here is my attempt: “Type inference” refers to compile-time inference of types so that type errors can be detected.
Since you work in AI, you have probably heard of the unification algorithm (of Prolog fame). Well, when a “languages” geek says “type inference” they mean something whose implementation involves the unification algorithm (among other things). (Note that most programmers using a compiler that does type inference probably do not know that it involves the unification algorithm. In other words, type inference is usually explained without making reference to unification.)
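Just to make that concrete, here is a toy first-order unifier in Perl. The term representation is invented for this sketch, and I have left out the occurs check that a real implementation needs, so treat it as an illustration of the mechanism, not production code:

use strict;
use warnings;

# A type term is ['var', $name] or ['con', $name, @args];
# e.g. the type a -> Int is ['con', '->', ['var', 'a'], ['con', 'Int']].

# Follow a variable through the substitution until it stops at
# an unbound variable or a constructor.
sub walk {
    my ($term, $subst) = @_;
    while ($term->[0] eq 'var' && exists $subst->{ $term->[1] }) {
        $term = $subst->{ $term->[1] };
    }
    return $term;
}

# Unify two terms, extending %$subst; returns undef on a clash.
# (No occurs check, so cyclic bindings like a = List a slip through.)
sub unify {
    my ($t1, $t2, $subst) = @_;
    $t1 = walk($t1, $subst);
    $t2 = walk($t2, $subst);
    return $subst if $t1->[0] eq 'var' && $t2->[0] eq 'var'
                  && $t1->[1] eq $t2->[1];
    if ($t1->[0] eq 'var') { $subst->{ $t1->[1] } = $t2; return $subst; }
    if ($t2->[0] eq 'var') { $subst->{ $t2->[1] } = $t1; return $subst; }
    return undef unless $t1->[1] eq $t2->[1] && @$t1 == @$t2;
    for my $i (2 .. $#$t1) {
        $subst = unify($t1->[$i], $t2->[$i], $subst) or return undef;
    }
    return $subst;
}

# Unifying a -> Int against Str -> b binds a to Str and b to Int.
my $s = unify(['con', '->', ['var', 'a'], ['con', 'Int']],
              ['con', '->', ['con', 'Str'], ['var', 'b']],
              {});
print "a = $s->{a}[1], b = $s->{b}[1]\n";    # a = Str, b = Int

A Hindley-Milner typechecker walks the program generating equations between type terms like these, and unification is what solves the equations.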
The same “languages” geek would refer to what is happening in your example as “coercion”. In particular, a string is being coerced to an integer.
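If you turn on warnings, Perl will even flag the coercion when it is lossy; a quick demonstration (expected output in the comments):

use warnings;

my $y = "3" + 2;   # "3" is silently coerced to the number 3
print "$y\n";      # 5
my $z = "a" + 2;   # warns: Argument "a" isn't numeric in addition (+)
print "$z\n";      # 2, because "a" coerces to 0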