A rather big problem that technical language discussions often miss is that language ecosystems are a big deal. The bigger the mindshare a language has, the more people you can hire to write stuff in it, the more people there are writing middleware and libraries you can glue together to get what you want, the more people there are discovering bugs in the language implementation, and the more literature there is on how to use the language effectively. The bigger the existing codebase for the language, the easier it is to find some nice place to slot your new thing in instead of rigging up life support and a special interface to the alien outside environment. Uncommon languages face an uphill battle here unless they have some special niche they excel in. Functional programming has been around a long time, and having referential transparency by default doesn’t seem to have won hearts and minds by itself so far. Some people think that the coming sea change from cheap sequential speedups via Moore’s Law to having to actually do some very clever things with a parallel architecture is going to favor languages where you can mostly work at a level of abstraction that doesn’t involve mutable state.
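To make that last point concrete, here is a toy Python sketch (my own illustration, not anyone’s production code): the imperative sum threads every step through a mutated accumulator, while the functional decomposition exposes independent subproblems that a runtime could in principle farm out to separate cores.

    def total_imperative(xs):
        # Sum via a mutable accumulator: each step reads and writes
        # `total`, so step N depends on step N-1 by construction.
        total = 0
        for x in xs:
            total += x
        return total

    def total_functional(xs):
        # The same sum as divide-and-conquer over immutable inputs.
        # The two recursive calls share no state, so a runtime could
        # evaluate them on separate cores and combine the results.
        if len(xs) <= 1:
            return xs[0] if xs else 0
        mid = len(xs) // 2
        return total_functional(xs[:mid]) + total_functional(xs[mid:])

    assert total_imperative([1, 2, 3, 4]) == total_functional([1, 2, 3, 4]) == 10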
Thus far, the ecosystem issue seems to be a big practical damper. A Microsoft- or Google-sized entity going nuts and starting to do everything with functional programming would probably do a lot more for language adoption, via a growing pool of libraries, quality implementations, training, literature, career prospects and imitator companies, than a very large amount of academic R&D work.
It’s a neat topic in general. John Backus’ Turing Award lecture (PDF) from 30 years ago was one of the early expositions of the issue, and we’re still in pretty much the same situation described there. There are many stories of how the higher abstraction of functional programming helped with a difficult problem, but they tend to involve problems that were quite well-specified, both in their behavior and in their interface to the outside world. Practical programming today is probably even less like that than it was 30 years ago. Success stories for using functional programming in the large and in anger, the way you tend to end up doing when programming things people want to pay money for, are a lot rarer than stories of elegant, self-contained programs manageable by a single programmer.
There is a support group for people who want to use functional programming in the real world.
The problem with your ecosystem argument is that since the creation of ML in the early 1970s and of Haskell-like languages in the mid-1980s, many languages (Smalltalk, Perl, Java, Python, Javascript, Ruby, C#) were created that overcame the ecosystem problem. To the argument that none of those languages is as much of a departure from the status quo as Haskell and ML are, I respond that object-orientation was a very big change.
You are underestimating the ability of the programming profession to select the best tools and overestimating what functional programming languages can do for the average programmer in industry. The argument that removing mutation from a program makes it easier for the program to exploit multiple cores (what you call “parallelism”), for example, was made in 1977 in the paper by Backus you link to and has been made constantly since then, yet 34 years later the vast majority of programs that exploit multiple cores do so in ordinary, non-functional languages.
If you have a strong interest in certain areas of math, including Friendliness theory, and you also need to spend a lot of time writing programs, well, then Haskell (or Scheme using functional techniques if static type checking rubs you the wrong way) is probably worth studying because some of the things you learn from Haskell can be used in your math work as well as your programming work.
In contrast, if you just want to make a good living as a programmer, then studying Haskell and other functional languages is probably mostly a distraction and not the best way to deliver value to your customers or employers. The concepts and techniques originating in functional languages that turn out to be useful to you will become available when they are added to mainstream languages (e.g., list comprehensions in Python, or anonymous functions and first-class functions in Javascript).
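To be concrete about what has already crossed over, both of those examples are now perfectly ordinary code; here is the Python side of it (a trivial sketch of my own):

    words = ["ecosystem", "library", "mindshare"]

    # A list comprehension, a notation mainstream Python adopted from
    # the functional world:
    lengths = [len(w) for w in words]          # [9, 7, 9]

    # Anonymous functions and functions as first-class values:
    by_length = sorted(words, key=lambda w: len(w))
    apply_twice = lambda f, x: f(f(x))

    assert by_length == ["library", "ecosystem", "mindshare"]
    assert apply_twice(lambda n: n + 1, 0) == 2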
Some people think that the coming sea change from cheap sequential speedups via Moore’s Law to having to actually do some very clever things with a parallel architecture is going to favor languages where you can mostly work at a level of abstraction that doesn’t involve mutable state.
I’ve heard people say that for years now, but from what I see in practice, the main trend driven by this situation is that parallel programming in ordinary imperative languages is becoming easier, with much better support both at the language level and in tooling such as debuggers and profilers.
If you’re working with a problem that has an inherently parallel structure, you have elegant and easy-to-use APIs to parallelize it in pretty much any language these days. If not, I don’t see where using a functional language would help you. (It could be that a good functional programmer will find it easier to devise a parallelizable solution to a given problem than a typical imperative programmer will, but is there actually some advantage a functional programmer enjoys over someone skilled in traditional parallel programming?)
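As a concrete example of what I mean by such APIs, here is a sketch in plain Python (the workload function and its constants are my own toy stand-ins): turning the sequential version into the parallel one is essentially a one-line change.

    from concurrent.futures import ProcessPoolExecutor

    def simulate(seed):
        # Toy stand-in for one independent unit of work, e.g. one
        # Monte Carlo run; the constants are arbitrary (a simple LCG).
        state = seed
        for _ in range(100_000):
            state = (1103515245 * state + 12345) % 2**31
        return state % 100

    if __name__ == "__main__":
        seeds = range(32)
        # Sequential: results = [simulate(s) for s in seeds]
        # Parallel, in an ordinary imperative language, is one line:
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(simulate, seeds))
        print(sum(results) / len(results))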