I disagree regarding your “large dataset” argument. Very frequently, such problems can be easily solved in a functional language using immutable data structures that share portions of their structure. In your example, if that 10GB array were represented as a functional immutable tree with a high branching factor, you could efficiently copy only the parts that changed and point the rest of the tree back to the original copy, incurring a negligible performance penalty.
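(A minimal sketch of the idea in Haskell, using Data.IntMap, which is itself a persistent radix tree, as a stand-in; production structures such as Clojure’s persistent vectors use 32-way branching nodes for better constants:)

    import qualified Data.IntMap.Strict as IntMap
    import Data.Word (Word8)

    -- A persistent "byte array" as a tree keyed by index. Inserting
    -- copies only the short path from the root to the changed leaf;
    -- the rest of the structure is shared with the old version.
    type PBytes = IntMap.IntMap Word8

    setByte :: Int -> Word8 -> PBytes -> PBytes
    setByte = IntMap.insert

    -- Both versions stay usable after the update; the "copy" costs
    -- O(min(n, W)) per changed byte, not O(n) for the whole structure.
    main :: IO ()
    main = do
      let v0 = IntMap.fromList [(i, 0) | i <- [0 .. 9]]
          v1 = setByte 3 0xFF v0
      print (IntMap.toList v0, IntMap.toList v1)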
If I have a flat 10GB array of bytes, and I want to change an arbitrary sparse subset of those bytes, there’s no way to do that efficiently with pure functional programming.
I don’t know what the word “functional” means in the phrase “functional immutable tree”.
If I have a flat 10GB array of bytes, and I want to change an arbitrary sparse subset of those bytes, there’s no way to do that efficiently with pure functional programming.
That constitutes an indictment of functional programming only if solving problems in the real world necessitates changing arbitrary sparse subsets of flat (i.e., not tree-like) arrays >10GB in size. If not, you’ve just defended imperative programming by appealing to concepts and considerations that matter only within imperative programming.
I agree with you that Haskell is probably not going to displace imperative programming, but I would give other reasons for my conclusion. (I will speak of Haskell instead of functional languages to avoid tedious discussion of which languages are ‘functional’ languages.) In particular, the Haskell programmer has fewer ways than the imperative programmer to estimate the resource usage of his programs: Haskell compilers engage in optimizations that tend to radically alter the asymptotic time complexity or memory usage of the program being compiled, and which alterations are made depends on esoteric details of the implementation of those optimizations. One optimization used by Haskell compilers, but not by Java, Lisp, etc., compilers, that seems to drastically increase the difficulty of reasoning about the resource usage of Haskell programs is strictness analysis.
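A concrete instance of the strictness-analysis point, using a standard example rather than anything exotic: the space behaviour of a plain left fold depends on whether the analyzer fires.

    import Data.List (foldl')

    -- Compiled without optimization, foldl builds a chain of a million
    -- unevaluated thunks before forcing any of them: O(n) space. With
    -- -O, GHC's strictness analysis usually makes the accumulator
    -- strict instead: O(1) space. Same source, different asymptotics,
    -- decided by the optimizer.
    lazySum :: Int -> Int
    lazySum n = foldl (+) 0 [1 .. n]

    -- foldl' forces the accumulator at each step, so the O(1)-space
    -- behaviour no longer depends on what the optimizer decides to do.
    strictSum :: Int -> Int
    strictSum n = foldl' (+) 0 [1 .. n]

    main :: IO ()
    main = print (lazySum 1000000, strictSum 1000000)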
One can try to defend Haskell by pointing out that a programmer in this millennium should worry about resource usage only after experience with the program shows that better resource usage is actually required, and that at that point, profiling is a more reliable way of finding the places that need optimization than static analysis of the program by the programmer. Well, two responses to that. First, efficiency is often enough a consideration that cannot be ignored, so it is genuinely useful for the programmer to have some ways of reasoning about it that do not depend on profiling. Second, the last time I looked in detail at Haskell, although John Hughes’s group was developing some tools for profiling Haskell programs, profiling Haskell is clearly a more difficult task than profiling imperative programs, and the tools are nowhere near as mature.
I have to add the disclaimer that it has been about 8 years since I stopped reading the main Haskell mailing list and about 6 years since I wrote my last Haskell program. But Haskell itself is 21 years old, and a very similar language, Miranda, which is certainly vulnerable to the criticisms in this comment and which was also once the ‘future of programming’, is 5 years older than Haskell.
Sorry, it was an extra word. I admit that there is no way to mutate an arbitrary bunch of bytes in an array other than by mutating a bunch of bytes in an array. My point was the one rhollerith makes here: frequently you can solve the same kinds of problems with similar performance by using trees instead of arrays.
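To make that concrete, here is one tree-based way to handle exactly the sparse-update scenario above (the Patched type and its field names are mine, purely for illustration): leave the flat base untouched and record the sparse changes in a persistent map.

    import qualified Data.ByteString as BS
    import qualified Data.IntMap.Strict as IntMap
    import Data.Word (Word8)

    -- Illustrative type, not a library API: a flat immutable base plus
    -- a persistent overlay of sparse edits. Changing k bytes of a 10GB
    -- base costs roughly O(k log k) in total, not a 10GB copy, and
    -- every earlier version remains readable.
    data Patched = Patched
      { base    :: BS.ByteString
      , overlay :: IntMap.IntMap Word8
      }

    writeByte :: Int -> Word8 -> Patched -> Patched
    writeByte i b (Patched bs ov) = Patched bs (IntMap.insert i b ov)

    -- Reads check the overlay first and fall back to the flat base.
    readByte :: Int -> Patched -> Word8
    readByte i (Patched bs ov) = IntMap.findWithDefault (BS.index bs i) i ov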