The functional/imperative distinction is not a real one
How is the distinction between functional and imperative programming languages “not a real one”? I suppose you mean that there’s a continuum of language designs between purely functional and purely imperative. And I’ve seen people argue that you can program functionally in Python or emulate imperative programming in Haskell. Sure. That’s all true. It doesn’t change the fact that functional-style programming is manifestly more machine-checkable in the average (and best) case.
it’s less important to provability than languages’ complexity, the quality of their type systems and the amount of stupid lurking in their dark corners.
Agreed. The most poorly programmed functional programs will be harder to machine check than the most skillfully designed imperative programs. But I think that for typical or best-case programming scenarios, functional-style programming makes it hands-down more natural to write the correct kind of structures that can be reliably machine checked, in a way that imperative programming languages just don’t.
The entry level functional programming course is going to focus on all the right things: type theory, model theory, deep structure. The first imperative programming course at most universities is going to teach you how to leverage side-effects, leverage side-effects more, and generally design your code in a way that makes it less tractable for verification and validation later on.
How is the distinction between functional and imperative programming languages “not a real one”?
“Not a real one” is sort of glib. Still, I think Jim’s point stands.
The two words “functional” and “imperative” do mean different things. The problem is that, if you want to give a clean definition of either, you wind up talking about the “cultures” and “mindsets” of the programmers that use and design them, rather than actual features of the language. Which starts making sense, really, when you note “functional vs. imperative” is a perennial holy war, and that these terms have become the labels for sides in that war, rather than precise technical positions.
I mean, I am somewhat partisan in that war, and rather agree that, e.g., we should point new programmers to Scheme rather than Python or Java. But presenting “functional” vs. “imperative” as the major division in thinking about programming languages is epistemically dirty, when there’s so many distinctions between languages that matter just as much, and describe things more precisely.
(Jim: fair rephrasing?)
I’ve written about the difference between imperative programs and functional ones (here, trying to explain why an imperative programmer might find functional programming difficult).
The main differences (do vs be, reversed order of reading, and heavy use of first class functions), make a quite sharp divide. Way sharper than other divides, such as “Object Oriented” versus the rest (though Alan Kay’s original vision is probably more distinguishable than the current C++/Java vision).
Now, when talking about programming an FAI, the most probable course of action will be to translate the math of FAI into a program. One thing I’ve noticed about math formalism outside the specific field of computer programming is that most formulations are either stateless or explicit about state. When at some point we say X = some long expression, we know it won’t change until the end of the assignment. The math I know tends to be functional by default.
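A tiny Python sketch of that default: in math-like single-assignment style a binding states a fact that stays true, while imperative rebinding silently invalidates earlier reasoning.

```python
# In math, "X = some long expression" states a fact that stays true; in
# imperative code the same syntax is an instruction, and a name can denote
# different values over time.

# Math-like single-assignment: each binding stays valid below it.
x = 3 * 7
y = x + 1
assert x == 21 and y == 22     # both facts hold for the rest of the block

# Imperative rebinding: the earlier fact "x == 21" is silently invalidated.
x = x + 1
assert x == 22
```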
Pretty much by definition all (Turing-complete) programming languages can in principle be transformed into each other, it’s not even very hard: just take the ASM code and build some rudimentary reverse compiler that creates some strange looking code in your desired goal language.
For practical purposes, machine checkability is easier for functional languages, but it’s a difference in degree, not one in kind. (Corrections welcome!)
that creates some strange looking code in your desired goal language.
The code wouldn’t just look strange; it would likely be more complex than code written directly in the target language, using its standard style, at a high level, to solve the same problem. The reverse-compiled code would be harder to analyze, and it would be harder to know what would be useful to prove about it, than code written directly in the target language, where the programmer’s reasoning about why the code should work can guide the construction of the proof.
Yes; however, from what I recall about such proving methods, they were quite formulaic and about an equal amount of pain for each line (I should dig out some references some time), so the difference may not be as pronounced. Be that as it may, I agree that there is a difference in degree, which does matter for practical purposes.
I expect it is more than a difference in degree. When the programmer can tell the automated theorem prover that the return value of a certain function always satisfies some condition that other functions rely on, the theorem prover can try to prove that condition holds and use that fact for other proofs without being distracted by other possible conditions that are not relevant. This makes the search space much smaller.
The point is that the programmer having some understanding of the proof checker, and designing the code to be not just correct but provably correct in a reasonable amount of time, makes a big difference compared with the general problem of proving things about arbitrary code, where you run into the halting problem.
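As an illustration (all names here are hypothetical, and a runtime assertion stands in for what a theorem prover would establish statically), a declared postcondition lets callers rely on one stated fact instead of re-analyzing the callee, which is exactly what shrinks the search space:

```python
# All names here are hypothetical; a runtime check stands in for a proof.

def ensures(condition):
    """Attach a postcondition that callers may rely on."""
    def wrap(f):
        def checked(*args):
            result = f(*args)
            assert condition(result), f"postcondition of {f.__name__} failed"
            return result
        return checked
    return wrap

@ensures(lambda r: r >= 0)
def abs_diff(a, b):
    return a - b if a >= b else b - a

# A caller needs only the declared fact (the result is non-negative), so the
# square root below is known to be safe without re-analyzing abs_diff; the
# prover's search space shrinks to the stated condition.
import math
root = math.sqrt(abs_diff(3, 10))   # guaranteed non-negative argument
assert abs_diff(3, 10) == 7
```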
I agree with both your example (“making the search space much smaller”) and your explanation (“makes a big difference”), we just differ on whether thus changing the resource requirements constitutes a change in kind, or one in degree.
Also, I took your “code would likely be more complex” as “be less human-understandable”, as opposed to its actual Kolmogorov complexity, which afaik would be identical.
How is the distinction between functional and imperative programming languages “not a real one”? I suppose you mean that there’s a continuum of language designs between purely functional and purely imperative.
Not exactly. There is a functional/imperative distinction, but I don’t think it’s located in the language; it’s more a matter of style and dialect. The majority of the difference between functional style and imperative style is in how you deal with collections. In functional style, you use map, filter, fold and the like, mostly treat collections as immutable and create new ones, and use a lot of lambda expressions to support this. In imperative style, you emulate the collection operators using control flow constructs. Most major programming languages today support both styles, and the two styles act as dialects. (The main exceptions are non-garbage-collected languages, which can’t use the functional style because it interacts badly with object ownership, and Java, which lacks lambda expressions as a symptom of much bigger problems.)
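For concreteness, here are the two dialects side by side in Python, which supports both:

```python
# The same task in both dialects: the total of the squares of the even numbers.
from functools import reduce

xs = [1, 2, 3, 4, 5, 6]

# Imperative style: control flow plus a mutable accumulator.
total = 0
for x in xs:
    if x % 2 == 0:
        total += x * x

# Functional style: filter / map-as-fold, no mutation of locals.
total_fp = reduce(lambda acc, x: acc + x * x,
                  filter(lambda x: x % 2 == 0, xs),
                  0)

assert total == total_fp == 56
```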
These styles are less different than they appear. A lot of use of mutation is illusory; it matches to a palette of a dozen or so recognizable patterns which could just as easily be written in functional form. In fact, ReSharper can automatically translate a lot of C# between these two styles, in a provably-correct fashion; and if you want to prove complicated things about programs, the infrastructure to make that sort of thing easy is part of the price of admission.
But there’s a catch. Programmers who start in a functional language and avoid the imperative style don’t learn the palette of limited mutations, and to them, imperative-style code is more inscrutable than to programmers who learned both styles. And while I’m much less certain of this, I think there may be an order-dependence, where programmers who learn imperative style first and then functional do better than those who learn them in reverse order. And I don’t think it’s possible to get to the higher skill levels without a thorough grasp of both.
Well, I figure I don’t really want to recommend a ton of programming courses anyway. I’m already recommending what I presume is more than a bachelor’s degree worth of course when pre-reqs and outside requirements at these universities are taken into account.
So if someone takes one course, they can learn so much more that helps them later in this curriculum from the applied, function programming course than its imperative counterpart. And the normal number of functional programming courses that people take in a traditional math or CS program is 0. So I have to make a positive recommendation here to correct this. I couldn’t make people avoid imperative programming courses anyway, even if I tried. So people will oversample them (and follow your implied recommendation) relative to my core recommendations anyway.
So in practice, most people will follow your advice, by following mine and actually studying some functional programming instead of none and then study a ton of imperative programming no matter what anyone says.
I don’t think you understand functional programming. What background are you coming from?
Where did that come from? I didn’t spot anything wrong in his comment, and I’m pretty knowledgeable myself (I’m no authority, but I believe my grasp of FP is quite comprehensive).
(Edit: Retracted: I now see it did come from somewhere: several comments, including the top one.)
Functional programming isn’t an idiomatic approach to container manipulation, it’s a paradigm that avoids mutable state and data. Write a GUI in Haskell using pure functions to see how different the functional approach is and what it is at its core. Or just compare a typical textbook on imperative algorithms with one on functional algorithms. Container manipulation with functions is just an idiom.
And sure you can write functional code in C++, for example (which by the way has map, filter, fold, and so on), but you can also write OO code in C. But few people do, and for a reason: the language makes it near impossible or, at the very least, undesirable for humans. That’s close enough for the distinction being located in the language.
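A minimal Python sketch of that paradigm-level difference (as opposed to container idioms): state mutated in place versus state threaded explicitly through pure functions.

```python
# Imperative/OO: state is mutated in place; nothing at the call site shows
# that press() changes anything.
class Counter:
    def __init__(self):
        self.n = 0
    def press(self):
        self.n += 1

c = Counter()
c.press()
c.press()

# Functional: state is an explicit value, and "updating" returns a new one.
# Every input and output is visible in the function's signature.
def press(n):
    return n + 1

n0 = 0
n2 = press(press(n0))

assert c.n == 2 and n2 == 2 and n0 == 0   # the old state is untouched
```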
Functional programming isn’t an idiomatic approach to container manipulation, it’s a paradigm that avoids mutable state and data.
I’m not sure you and jimrandomh actually disagree on that point. I mean, avoiding mutable state is bound to change your approach to container manipulation. You described the root cause, he described the surface differences.
Also,
And sure you can write functional code in C++, for example (which by the way has map, filter, fold, and so on), but you can also write OO code in C. But few people do, and for a reason: […]
jimrandomh knows that:
(The main exceptions are non-garbage-collected languages, which can’t use the functional style because it interacts badly with object ownership, and Java, which lacks lambda expressions as a symptom of much bigger problems).
I personally tried functional style in C++, and did feel the pain (the standard library’s support for it is barely usable before C++11). There are ways, however, to limit the damage. And languages such as Python and JavaScript do not discourage the functional style so much; their libraries do.
Now sure, languages do favour a style over the other, like Ocaml vs Lua. It does let us classify them meaningfully. On the other hand, it is quite easy to go against the default. I’m pretty sure both you and jimrandomh agree with that as well.
From the look of it, you just talked past each other, while there was no real disagreement to begin with…
In light of the following comment by jim, I think we do disagree:
Please be careful about exposing programmers to ideology; it frequently turns into politics and kills their minds. This piece in particular is a well-known mindkiller, and I have personally witnessed great minds acting very stupid because of it. The functional/imperative distinction is not a real one, and even if it were, it’s less important to provability than languages’ complexity, the quality of their type systems and the amount of stupid lurking in their dark corners.
And while I would normally interpret jim’s nearest comment in this thread charitably (i.e., as mostly in agreement with me), it’s more reasonable to interpret it in light of the quoted comment.
I think he probably doesn’t, or didn’t, understand the functional paradigm. If he did, I think he would know about its usefulness in concurrent or parallel programming, and consequently know that it is not just a mind-killing ideology like US political parties, but a paradigm with real advantages and real disadvantages relative to other paradigms. I don’t think he would have written his first comment if he really knew that. I think he’s probably confusing the functional idiomatic approach/style/dialect/whatever with the functional paradigm. I mean, he says “The majority of the difference between functional style and imperative style is in how you deal with collections.” And remember, this thread was created in reference to a comment about a textbook on functional programming (not functional “style”—maybe he’s backpedaling, or, charitably, he means FP).
(Also, C++ is a non-garbage-collected language. And, more importantly, I don’t mean to shit on jim. I’m more worried about how many people thought it was a comment worth being at the top of the comment section in a thread about course recommendations for FAI researchers. I would have been fine ignoring it otherwise.)
The functional/imperative distinction is not a real one
Oops.
I think he’s probably confusing the functional idiomatic approach/style/dialect/whatever with the functional paradigm. I mean he says “The majority of the difference between functional style and imperative style is in how you deal with collections.”
If I got it, you are saying that he perceived the surface features (dealing with collections), but not their deeper cause (avoid mutable state). Sounds about right. Re-oops, I guess.
I don’t mean to shit on jim.
Now it occurred to me that I may have forced you to. Sorry.
There is a functional/imperative distinction, but I don’t think it’s located in the language
It can be—Haskell is purely functional, I would say early FORTRAN is purely imperative.
and Java, which lacks lambda expressions as a symptom of much bigger problems
Java has true closures (anonymous inner classes). But you don’t want to use them. Or Java for that matter :).
I used to like functional programming much more when younger, before I realized the entire point of functional programming is to hide the causality of the program from the human, and the human needs the causality of the program to reason about the program and to debug. Sure, functional programs are easier for computers to handle, but human time is more precious.
Debugging non-trivial functional program projects requires the reinvention of imperative (causal) style (if you are very smart, this happens on the fly in your head, and the program still looks functional).
That this article wrote about the functional/imperative distinction in the way it did is a reminder that lesswrong is an attractor for holy war prone young people :). Keep religion out of science please!
the entire point of functional programming is to hide the causality of the program from the human
Why? I would say it’s the opposite (and really, the causality being clear and obvious is just a corollary of referential transparency). The difficulty of reasoning about concurrent/parallel code in an imperative language, for example, is one of the largest selling points of functional programming languages like Erlang and Haskell.
The causality in a functional language is far from obvious. Consider Haskell, a language that is both purely functional and lazy, and is considered somewhat of a beautiful poster child of the functional approach. Say you write a program and it has a bug—it’s not doing what it’s supposed to. How would you debug it? Some alternatives:
(a) Use a debugger to step through a program until it does something it’s not supposed to (this entails dealing with a causal order of evaluation of statements—something Haskell as a lazy and functional language is explicitly hiding from you until you start a debugger).
(b) Use good ol’ print statements. These will appear in a very strange order because of lazy evaluation. Again, Haskell hides the true order—the true order has nothing to do with the way the code appears on the page. This makes it difficult to build a causal model of what’s going on in your program. A causal model is what you need if you want to use print statements to debug.
(c) Intervene in a program by changing some intermediate runtime value to see what would happen to the output. As a functional language, Haskell does not allow you to change state (ignoring monads which are a very complicated beast, and at any rate would not support a straightforward value change while debugging anyways).
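Python is strict, but its generators are lazy, so they can show a miniature of the reordering described in (b): effects fire when values are demanded, not where the code appears on the page.

```python
def traced(label, xs):
    # A generator: the print below runs only when a value is *demanded*.
    for x in xs:
        print(f"{label}: producing {x}")
        yield x

# Build a two-stage lazy pipeline. Nothing executes yet.
pipeline = traced("stage2", (x * x for x in traced("stage1", [1, 2, 3])))

print("pipeline built; no stage has printed anything yet")
first = next(pipeline)   # only now do stage1 and stage2 each run one step
assert first == 1
```

The interleaved "stage1"/"stage2" output bears little relation to the order the code appears in, which is the debugging difficulty being described.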
My claim is that causality is so central to how human beings think about complex computer programs that it is not possible to write and debug large programs written in functional style without either building a causal model of your program (something most functional languages will fight you on, to the extent that they are functional), or mostly sticking to an imperative “causal” style and using only simple functional idioms that you know work and that do not require further thinking (like map and reduce, and simple closure use). Note that even Haskell, a language committed to “wearing the hair shirt” (http://research.microsoft.com/en-us/um/people/simonpj/papers/haskell-retrospective/haskellretrospective.pdf) of functional programming, has made concessions to the imperative/causal writing style by providing the “do” notation.
Personally, I love functional idioms, and I think functional code with heavy recursion use is often quite beautiful. But I don’t think the functional approach is well suited to writing complex code precisely because it is violating a principal rule of computer science—make things easy for the human at the expense of the machine.
Laziness can muddy the waters, but it’s also optional in functional programming. People using Haskell in a practical setting usually avoid it, and new language extensions are being developed to make strict evaluation the default (in records, for example).
What you’re really saying is that the causal link between the language and the assembly is less obvious, which is certainly true, as it is a very high-level language. However, if we’re talking about the causality of the language itself, then functional languages enforce a more transparent causal structure in the code.
You can be certain that a function that isn’t tainted by IO in Haskell, for example, isn’t going to involve dozens of different causal structures. An imperative call like AnimalFactory.create(“dog”) could involve dozens of different dependencies (e.g. through singletons or dependency injection), making the dependency graph (and causal structure) obfuscated. This lack of transparent guarantees about state and dependencies in imperative languages makes concurrent/parallel programming (and even plain code) very difficult to reason about and test.
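A sketch of that contrast in Python (AnimalFactory is the comment's hypothetical example; the registry below is mine): the first version's behavior depends on hidden mutable state, the second's on nothing but its arguments.

```python
# Hidden dependency: the result of create() depends on a mutable module-level
# registry that nothing at the call site reveals.
_registry = {}

def register(name, maker):
    _registry[name] = maker

def create(name):
    return _registry[name]()   # behavior depends on who called register(), and when

# Transparent version: every dependency arrives as an argument, so the causal
# structure of a call is exactly its inputs, and it is trivial to test.
def create_pure(registry, name):
    return registry[name]()

register("dog", lambda: "woof")
assert create("dog") == "woof"
assert create_pure({"dog": lambda: "woof"}, "dog") == "woof"
```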
Moreover, the concessions that Haskell has made are probably temporary. Haskell is a research language, and functional solutions to problems like IO and event-driven programs have been put forward but are not yet widely accepted. And even ignoring those solutions, you still have a basic paradigm in which the top-level code is imperative in style, with everything else being functional.
And while it can be more difficult to debug functional programs, they’re easier to test, and they’re less prone to runtime bugs. And really, the debugging problem is one of laziness and hard-to-use debuggers. Debugging F# with Visual Studio’s debugger isn’t that difficult.
(Note: that when I talk about functional programming, I’m talking about a paradigm that avoids mutable state and data rather than idiomatic approaches to container manipulation)
Functional-style programming doesn’t make it any more natural, it just forbids you from doing things any other way. I spend most of my time when dealing with functional-style programming (primarily in XSLT) trying to figure out ways around the constraints imposed by the language rather than actually solving the problem I’m working on.
In XSLT I once copied a chunk of code 8 times, replacing its recursive function calls with its own code, because it was blowing up the stack; and it’s not like I could use mutable variables and skip the recursion, it was literally the only implementation possible. And it had to call itself in multiple places of its own code; it couldn’t be phrased in a tail-recursion friendly fashion. Meaning that for that code, no functional language could have resolved the stack explosion issue. -That’s- functional programming to me; dysfunctional.
[ETA: Apparently there is a pattern which would overcome the tail stack issue, and compilers exist which can take advantage of it, so my statement that “No functional language could have resolved the stack explosion issue” was false.]
no functional language could have resolved the stack explosion issue
A functional language could compile to code that uses references to continuations (functions to call, with parameters) instead of the physical stack. See continuation-passing style. (The language wouldn’t need to use continuation-passing style itself; the conversion would be straightforward for the compiler to do.)
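A sketch of that transformation in Python: a function that recurses in more than one place (so, like the XSLT case, it has no direct tail-recursive form) rewritten into continuation-passing style and run on a trampoline, so pending work lives in heap-allocated closures rather than stack frames.

```python
import sys

def trampoline(thunk):
    # Keep calling until a non-callable (final) value comes back. Each
    # bounce uses constant Python stack depth.
    result = thunk
    while callable(result):
        result = result()
    return result

# fib recurses in two places, so it cannot be made tail-recursive directly.
# In CPS, each step hands its result to a continuation `k` instead of
# returning, and wraps the next step in a thunk for the trampoline.
def fib_cps(n, k):
    if n < 2:
        return k(n)
    return lambda: fib_cps(n - 1, lambda a:
           lambda: fib_cps(n - 2, lambda b:
           lambda: k(a + b)))

assert trampoline(lambda: fib_cps(15, lambda r: r)) == 610

# Recursion depth is now bounded by the heap, not the interpreter's stack:
# this succeeds with the recursion limit set far below the recursion depth.
def depth_cps(n, k):
    return k(0) if n == 0 else lambda: depth_cps(n - 1, lambda d: lambda: k(d + 1))

old = sys.getrecursionlimit()
sys.setrecursionlimit(100)
assert trampoline(lambda: depth_cps(10_000, lambda d: d)) == 10_000
sys.setrecursionlimit(old)
```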
That, to me, falls under “Trying to figure out ways around the constraints imposed by the language.”
I suspect it would have made for even messier code, as well, provided it would have even worked for that particular problem. I was parsing a deliberately and ingeniously obfuscated document, and I was updating particular pieces of information based on information obtained in recursive references which, needless to say, were obfuscated, including false flag references which would get me incorrect information if followed; the falsity couldn’t be determined locally, either, and required complex conditional comparisons to information contained in parent references. Some of the references were cyclical, to boot.
That, to me, falls under “Trying to figure out ways around the constraints imposed by the language.”
Functional languages do not have inherent stack limits. A stack limit is imposed by a particular implementation of the language, and what I described is how a different implementation could have a much larger stack limit (constrained by total memory rather than by the memory initially allocated to the physical stack), with no difference in the source code, because the compiler can make the transformation to continuation-passing style for you.
The point is that this problem you had with XSLT does not generalize to all possible functional languages the way you think it does.
The parts of XSLT that cause me problems are the functional programming parts; not every problem is well-suited to functional programming, especially when it comes to, as in that case, parsing complex interconnected documents to find information (ETA: Particularly when that information isn’t guaranteed to be in any one particular place, and you have to conditionally check multiple locations, each of which itself may be referenced in one of multiple locations that have to be conditionally checked).
Functional programming takes a mathematical approach to problem-solving. There are some problems it is exceedingly elegant at solving. The issue is that it isn’t any more elegant than a well-written alternative in another language, and it makes that elegance mandatory, which causes severe problems when you’re dealing with an inelegant problem.
I’m not hired to solve elegant problems. I’m hired to solve the problems that companies have spent 20 million dollars to fail to solve.
The parts of XSLT that cause me problems are the functional programming parts; not every problem is well-suited to functional programming, especially when it comes to, as in that case, parsing complex interconnected documents to find information
So your example of how ‘functional programming fails’ is a vague personal anecdote about possibly the worst ‘functional’ language in the world, many versions of which don’t even have higher-order functions, a basic and key functional feature dating back literally to the 1960s, and of which people have published research papers just to prove it is Turing-complete?
Do you understand why no one is going to find your claim that “functional programming sucks because I once wrote a bad program in XSLT” even remotely convincing? Even if you do boast about yourself that
I’m not hired to solve elegant problems. I’m hired to solve the problems that companies have spent 20 million dollars to fail to solve.
The problem was done under an NDA, so I can’t get too specific. The shortest explanation, however, is “Parsing obfuscated document.” It was a problem which was intentionally created by somebody to be as difficult as possible to solve in a programming language.
Functional programming doesn’t suck because of that problem; it would have been difficult in ANY language. Functional programming sucks because it deliberately prevents you from doing things. As a rule, any time anybody says “You shouldn’t be doing that,” it is because they lack imagination as to why you would need to do that. Functional programming is designed around the principle that “You shouldn’t be doing that.”
How is the distinction between functional and imperative programming languages “not a real one”? I suppose you mean that there’s a continuum of language designs between purely functional and purely imperative. And I’ve seen people argue that you can program functionally in python or emulate imperative programming in Haskell. Sure. That’s all true. It doesn’t change the fact that functional-style programming is manifestly more machine checkable in the average (and best) case.
Agreed. The most poorly programmed functional programs will be harder to machine check than the mostly skillfully designed imperative programs. But I think for typical programming scenarios or best case scenarios, functional-style programming makes it hands-down more natural to write the correct kind of structures that can be reliably machine checked and imperative programming languages just don’t.
The entry level functional programming course is going to focus on all the right things: type theory, model theory, deep structure. The first imperative programming course at most universities is going to teach you how to leverage side-effects, leverage side-effects more, and generally design your code in a way that makes it less tractable for verification and validation later on.
“Not a real one” is sort of glib. Still, I think Jim’s point stands.
The two words “functional” and “imperative” do mean different things. The problem is that, if you want to give a clean definition of either, you wind up talking about the “cultures” and “mindsets” of the programmers that use and design them, rather than actual features of the language. Which starts making sense, really, when you note “functional vs. imperative” is a perennial holy war, and that these terms have become the labels for sides in that war, rather than precise technical positions.
I mean, I am somewhat partisan in that war, and rather agree that, e.g., we should point new programmers to Scheme rather than Python or Java. But presenting “functional” vs. “imperative” as the major division in thinking about programming languages is epistemically dirty, when there’s so many distinctions between languages that matter just as much, and describe things more precisely.
(Jim: fair rephrasing?)
I’ve wrote about the difference between imperative programs and functional ones (here, trying to explain why an imperative programmer might find functional programming difficult).
The main differences (do vs be, reversed order of reading, and heavy use of first class functions), make a quite sharp divide. Way sharper than other divides, such as “Object Oriented” versus the rest (though Alan Kay’s original vision is probably more distinguishable than the current C++/Java vision).
Now when talking about programming an FAI, the most probable course of action will be to translate the math of FAI into a program. One thing I noticed about math formalism outside the specific field of computer programming, is that most formulations are either stateless, or explicit about state. When at some point we say
X = some long expression
, we know it won’t change until the end of the assignment. The math I know tend to be functional by default.Pretty much by definition all (Turing-complete) programming languages can in principle be transformed into each other, it’s not even very hard: just take the ASM code and build some rudimentary reverse compiler that creates some strange looking code in your desired goal language.
For practical purposes, machine checkability is easier for functional languages, but it’s a difference in degree, not one in kind. (Corrections welcome!)
The code wouldn’t just look strange, it would likely be more complex than code written directly in the language using its standard style at a high level to solve the same problem. The reversed compiled code would be harder to analyze, and harder to know what would be useful to prove about it, than code written directly in the target language with the reasoning behind why the programmer expects it to work serving as a guide to construct the proof.
Yes, however from what I recall about such proving methods, they were quite formulaic and about an equal amount of pain for each line (I should dig out some references some time), so the difference may not be as pronounced. Be that as it may, I agree that there is a difference in degree, which does matter for practical purposes.
I expect it is more than a difference in degree. When the programmer can tell the automated theorem prover that the return value of a certain function always satisfies some condition that other functions rely on, the theorem prover can try to prove that condition holds and use that fact for other proofs without being distracted by other possible conditions that are not relevant. This makes the search space much smaller.
The point is that the programmer having some understanding of the proof checker, and designing the code to be not just correct, but provably correct in a reasonable amount of time makes a big difference from the general problem of proving things about arbitrary code where you run into the halting problem.
I agree with both your example (“making the search space much smaller”) and your explanation (“makes a big difference”), we just differ on whether thus changing the resource requirements constitutes a change in kind, or one in degree.
Also, I took your “code would likely be more complex” as “be less human-understandable”, as opposed to its actual Kolmogorov complexity, which afaik would be identical.
Not exactly. There is a functional/imperative distinction, but I don’t think it’s located in the language; it’s more a matter of style and dialect. The majority of the difference between functional style and imperative style is in how you deal with collections. In functional style, you use map, filter, fold and the like, mostly treat them as immutable and create new ones, and use a lot of lambda expressions to support this. In imperative style, you emulate the collection operators using control flow constructs. Most major programming languages today support both syles, and the two styles act as dialects. (The main exceptions are non-garbage-collected languages, which can’t use the functional style because it interacts badly with object ownership, and Java, which lacks lambda expressions as a symptom of much bigger problems).
These styles are less different than they appear. A lot of use of mutation is illusory; it matches to a palette of a dozen or so recognizable patterns which could just as easily be written in functional form. In fact, ReSharper can automatically translate a lot of C# between these two styles, in a provably-correct fashion; and if you want to prove complicated things about programs, the infrastructure to make that sort of thing easy is part of the price of admission.
But there’s a catch. Programmers who start in a functional language and avoid the imperative style don’t learn the palette of limited mutations, and to them, imperative-style code is more inscrutable than to programmers who learned both styles. And while I’m much less certain of this, I think there may be an order-dependence, where programmers who learn imperative style first and then functional do better than those who learn them in reverse order. And I don’t think it’s possible to get to the higher skill levels without a thorough grasp of both.
Well, I figure I don’t really want to recommend a ton of programming courses anyway. I’m already recommending what I presume is more than a bachelor’s degree worth of course when pre-reqs and outside requirements at these universities are taken into account.
So if someone takes one course, they can learn so much more that helps them later in this curriculum from the applied functional programming course than from its imperative counterpart. And the normal number of functional programming courses that people take in a traditional math or CS program is zero. So I have to make a positive recommendation here to correct this. I couldn’t make people avoid imperative programming courses anyway, even if I tried. So people will oversample them (and follow your implied recommendation) relative to my core recommendations anyway.
So in practice, most people will follow your advice, by following mine and actually studying some functional programming instead of none and then study a ton of imperative programming no matter what anyone says.
I don’t think you understand functional programming. What background are you coming from?
Where did that come from? I didn’t spot anything wrong in his comment, and I’m pretty knowledgeable myself (I’m no authority, but I believe my grasp of FP is quite comprehensive).
(Edit: Retracted: I now see it did come from somewhere: several comments, including the top one.)
Functional programming isn’t an idiomatic approach to container manipulation, it’s a paradigm that avoids mutable state and data. Write a GUI in Haskell using pure functions to see how different the functional approach is and what it is at its core. Or just compare a typical textbook on imperative algorithms with one on functional algorithms. Container manipulation with functions is just an idiom.
And sure, you can write functional code in C++, for example (which, by the way, has map, filter, fold, and so on), but you can also write OO code in C. Few people do either, and for a reason: the language makes it near impossible or, at the very least, undesirable for humans. That’s close enough to saying the distinction is located in the language.
I’m not sure you and jimrandomh actually disagree on that point. I mean, avoiding mutable state is bound to change your approach to container manipulation. You described the root cause, he described the surface differences.
Also,
jimrandomh knows that:
I personally tried functional style in C++, and did feel the pain ( is unusable before C++11). There are ways however to limit the damage. And languages such as Python and Javascript do not discourage the functional style so much. Their libraries do.
Now sure, languages do favour one style over the other (Ocaml vs Lua, say), which does let us classify them meaningfully. On the other hand, it is quite easy to go against the default. I’m pretty sure both you and jimrandomh agree with that as well.
From the look of it, you just talked past each other, while there was no real disagreement to begin with…
In light of the following comment by jim, I think we do disagree:
And while I would normally interpret jim’s nearest comment in this thread charitably (i.e., as mostly in agreement with me), it’s more reasonable to interpret it in light of the quoted comment.
I think he probably doesn’t, or didn’t, understand the functional paradigm. If he did, I think he would know about its usefulness in concurrent or parallel programming, and consequently know that it is not just a mind-killing ideology like US political parties, but a paradigm with real advantages and real disadvantages relative to other paradigms. I don’t think he would have written his first comment if he really knew that. I think he’s probably confusing the functional idiomatic approach/style/dialect/whatever with the functional paradigm. I mean, he says “The majority of the difference between functional style and imperative style is in how you deal with collections.” And remember, this thread was created in reference to a comment about a textbook on functional programming (not functional “style”; maybe he’s backpedaling, or, charitably, he means FP).
(Also, C++ is a non-garbage-collected language. And more importantly, I don’t mean to shit on jim. I’m more worried about how many people thought it was a comment worth being at the top of the comment section in a thread about course recommendations for FAI researchers. I would have been fine ignoring it otherwise.)
Let’s see:
Oops.
If I got it, you are saying that he perceived the surface features (dealing with collections), but not their deeper cause (avoid mutable state). Sounds about right. Re-oops, I guess.
Now it occurred to me that I may have forced you to. Sorry.
It can be—Haskell is purely functional, I would say early FORTRAN is purely imperative.
Java has true closures (anonymous inner classes). But you don’t want to use them. Or Java for that matter :).
I used to like functional programming much more when younger, before I realized the entire point of functional programming is to hide the causality of the program from the human, and the human needs the causality of the program to reason about the program and to debug. Sure, functional programs are easier for computers to handle, but human time is more precious.
Debugging non-trivial functional program projects requires the reinvention of imperative (causal) style (if you are very smart, this happens on the fly in your head, and the program still looks functional).
That this article wrote about the functional/imperative distinction in the way it did is a reminder that lesswrong is an attractor for holy war prone young people :). Keep religion out of science please!
Why? I would say it’s the opposite (and really, the causality being clear and obvious is just a corollary of referential transparency). The difficulty of reasoning about concurrent/parallel code in an imperative language, for example, is one of the largest selling points of functional programming languages like Erlang and Haskell.
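To make the concurrency point concrete, here is a minimal sketch (in Python rather than Erlang or Haskell, with illustrative names): a referentially transparent function can be mapped over its inputs in parallel with no change in result, because no call depends on hidden state or on evaluation order.

```python
from concurrent.futures import ThreadPoolExecutor

def pure_square(x):
    # Referentially transparent: the result depends only on the argument.
    return x * x

inputs = [3, 1, 4, 1, 5]

# Evaluate in parallel and sequentially; the order of evaluation cannot
# change the answer, because no call touches shared mutable state.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(pure_square, inputs))
sequential = [pure_square(x) for x in inputs]
assert parallel == sequential == [9, 1, 16, 1, 25]

# The impure counterpart entangles every call with global mutation order,
# which is exactly what makes concurrent reasoning hard:
history = []
def impure_square(x):
    history.append(x)  # result context now depends on when/whether this ran
    return x * x
```

With `impure_square`, the final contents of `history` depend on thread scheduling; with `pure_square`, there is nothing scheduling can affect.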
The causality in a functional language is far from obvious. Consider Haskell, a language that is both purely functional and lazy, and is considered somewhat of a beautiful poster child of the functional approach. Say you write a program and it has a bug—it’s not doing what it’s supposed to. How would you debug it? Some alternatives:
(a) Use a debugger to step through a program until it does something it’s not supposed to (this entails dealing with a causal order of evaluation of statements—something Haskell as a lazy and functional language is explicitly hiding from you until you start a debugger).
(b) Use good ol’ print statements. These will appear in a very strange order because of lazy evaluation. Again, Haskell hides the true order—the true order has nothing to do with the way the code appears on the page. This makes it difficult to build a causal model of what’s going on in your program. A causal model is what you need if you want to use print statements to debug.
(c) Intervene in a program by changing some intermediate runtime value to see what happens to the output. As a functional language, Haskell does not allow you to change state (ignoring monads, which are a very complicated beast and at any rate would not support a straightforward value change while debugging anyway).
My claim is that causality is so central to how human beings think about complex computer programs that it is not possible to write and debug large programs written in functional style without either building a causal model of your program (something most functional language will fight with you about to the extent that they are functional), or mostly sticking to an imperative “causal” style, and only use simple functional idioms that you know work and that do not require further thinking (like map and reduce, and simple closure use). Note that even Haskell, a language committed to “wearing the hair shirt” (http://research.microsoft.com/en-us/um/people/simonpj/papers/haskell-retrospective/haskellretrospective.pdf) of functional programming, has given concessions to the imperative/causal writing style by providing the “do” shorthand.
Personally, I love functional idioms, and I think functional code with heavy recursion use is often quite beautiful. But I don’t think the functional approach is well suited to writing complex code precisely because it is violating a principal rule of computer science—make things easy for the human at the expense of the machine.
Laziness can muddy the waters, but it’s also optional in functional programming. People using Haskell in a practical setting usually avoid it, and are coming up with new language extensions to make strict evaluation the default (in records, for example).
What you’re really saying is the causal link between assembly and the language is less obvious, which is certainly true as it is a very high level language. However, if we’re talking about the causality of the language itself, then functional languages enforce a more transparent causal structure of the code itself.
You can be certain that a function that isn’t tainted by IO in Haskell, for example, isn’t going to involve dozens of different causal structures. An imperative function like AnimalFactory.create(“dog”) could involve dozens of different dependencies (e.g. through singletons or dependency injection), making the dependency graph (and causal structure) obfuscated. This lack of transparent guarantees about state and dependencies in imperative languages makes concurrent/parallel programming (and even plain code) very difficult to reason about and test.
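The contrast can be sketched like this (hypothetical names, echoing the AnimalFactory example above; Python is used only for brevity): the impure factory silently consults hidden global state, while the pure version takes every dependency as an argument, so the call site shows the whole dependency graph.

```python
# Hidden dependency: the factory silently consults a module-level registry.
_registry = {}

def register(kind, cls):
    _registry[kind] = cls

def create_animal(kind):
    return _registry[kind]()  # the caller cannot see this dependency

# Explicit dependency: the registry is a parameter, visible at the call site.
def create_animal_pure(kind, registry):
    return registry[kind]()

class Dog:
    sound = "woof"

register("dog", Dog)
assert create_animal("dog").sound == "woof"                      # works, but only because of hidden state
assert create_animal_pure("dog", {"dog": Dog}).sound == "woof"   # every input is visible
```

Testing `create_animal` requires setting up (and tearing down) global state; testing `create_animal_pure` requires only passing a dictionary.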
Moreover, the concessions Haskell has made are probably temporary. Haskell is a research language, and functional solutions to problems like IO and event-driven programs have been put forward but are not yet widely accepted. And even ignoring those solutions, you still have a basic paradigm where you have top-level imperative-style code with everything else being functional.
And while it can be more difficult to debug functional programs, they’re easier to test, and they’re less prone to runtime bugs. And really, the debugging problem is one of laziness and difficult-to-use debuggers. Debugging F# with Visual Studio’s debugger isn’t that difficult.
(Note: that when I talk about functional programming, I’m talking about a paradigm that avoids mutable state and data rather than idiomatic approaches to container manipulation)
Functional-style programming doesn’t make it any more natural, it just forbids you from doing things any other way. I spend most of my time when dealing with functional-style programming (primarily in XSLT) trying to figure out ways around the constraints imposed by the language rather than actually solving the problem I’m working on.
In XSLT I once copied a chunk of code 8 times, replacing its recursive function calls with its own code, because it was blowing up the stack; and it’s not like I could use mutable variables and skip the recursion, it was literally the only implementation possible. And it had to call itself in multiple places of its own code; it couldn’t be phrased in a tail-recursion friendly fashion. Meaning that for that code, no functional language could have resolved the stack explosion issue. -That’s- functional programming to me; dysfunctional.
[ETA: Apparently there is a pattern which would overcome the tail stack issue, and compilers exist which can take advantage of it, so my statement that “No functional language could have resolved the stack explosion issue” was false.]
A functional language could compile to code that uses references to continuations (functions to call, with parameters) instead of the physical stack. See continuation passing style. (The language wouldn’t need to use continuation passing style itself; the conversion would be straightforward for the compiler to do.)
That, to me, falls under “Trying to figure out ways around the constraints imposed by the language.”
I suspect it would have made for even messier code, as well, provided it would have even worked for that particular problem. I was parsing a deliberately and ingeniously obfuscated document, and I was updating particular pieces of information based on information obtained in recursive references which, needless to say, were obfuscated, including false flag references which would get me incorrect information if followed; the falsity couldn’t be determined locally, either, and required complex conditional comparisons to information contained in parent references. Some of the references were cyclical, to boot.
Functional languages do not have inherent stack limits. A stack limit is imposed by a particular implementation of the language, and what I described is how a different implementation of the language could have a much larger stack limit, (constrained by total memory rather than memory initially allocated to the physical stack), with no difference in the source code because the compiler can make the transformation to continuation passing style for you.
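As a sketch of that transformation (in Python for brevity, with illustrative names): a doubly-recursive function, which calls itself in multiple non-tail positions just as in the XSLT case, rewritten in continuation-passing style and driven by a trampoline. The pending work lives in heap-allocated closures instead of on the call stack, so the stack stays shallow no matter how deep the recursion goes.

```python
def trampoline(f, *args):
    # Repeatedly force zero-argument thunks until a real value appears.
    result = f(*args)
    while callable(result):
        result = result()
    return result

def fib_cps(n, k):
    # Continuation-passing fib: instead of returning, pass the result to k.
    # Every step returns a thunk to the trampoline, so the Python call
    # stack never grows with n; the continuation chain lives on the heap.
    if n < 2:
        return lambda: k(n)
    return lambda: fib_cps(
        n - 1,
        lambda a: lambda: fib_cps(
            n - 2,
            lambda b: lambda: k(a + b)))

assert trampoline(fib_cps, 20, lambda v: v) == 6765
```

Tail-call optimization cannot help `fib` because the recursive calls are not in tail position; the CPS-plus-trampoline transformation handles exactly that case, which is the point being made about compilers doing the conversion automatically.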
The point is that this problem you had with XSLT does not generalize to all possible functional languages the way you think it does.
Hm. Granted.
In fairness, I think it does generalize to most implementations.
I think I see the problem here.
Perlis comes to mind:
The parts of XSLT that cause me problems are the functional programming parts; not every problem is well-suited to functional programming, especially when it comes to, as in that case, parsing complex interconnected documents to find information (ETA: Particularly when that information isn’t guaranteed to be in any one particular place, and you have to conditionally check multiple locations, each of which itself may be referenced in one of multiple locations that have to be conditionally checked).
Functional programming takes a mathematical approach to problem-solving. There are some problems it is exceedingly elegant at solving. The issue is that it isn’t any more elegant than a well-written alternative in another language, and it makes that elegance mandatory, which causes severe problems when you’re dealing with an inelegant problem.
I’m not hired to solve elegant problems. I’m hired to solve the problems that companies have spent 20 million dollars to fail to solve.
So your example of how ‘functional programming fails’ is a vague personal anecdote about possibly the worst ‘functional’ language in the world, many versions of which don’t even have higher-order functions (a basic, key functional feature dating back literally to the 1960s), and of which people have published research papers just to prove it is Turing-complete?
Do you understand why no one is going to find your claim that “functional programming sucks because I once wrote a bad program in XSLT” even remotely convincing? Even if you do boast about yourself that
The problem was done under an NDA, so I can’t get too specific. The shortest explanation, however, is “Parsing obfuscated document.” It was a problem which was intentionally created by somebody to be as difficult as possible to solve in a programming language.
Functional programming doesn’t suck because of that problem; it would have been difficult in ANY language. Functional programming sucks because it deliberately prevents you from doing things. As a rule, any time anybody says “You shouldn’t be doing that,” it is because they lack imagination as to why you would need to do that. Functional programming is designed around the principle that “You shouldn’t be doing that.”