A few adjacent thoughts:
Haskell is powerful in the sense that when your program compiles, you get the program that you actually want with much higher probability than in most other languages. Many stupid mistakes that are runtime errors in other languages become compile-time errors. Why is almost nobody using Haskell?
Why is there basically no widely used homoiconic language, i.e. a language in which you can use the language itself to directly manipulate programs written in that language?
Here we have technologies that are basically ready to use (Haskell or Clojure), but people mostly decide not to use them. And by people, I mean professional programmers and companies that make software.
Why did nobody invent Rust earlier, by which I mean a systems programming language that prevents you from making really dumb mistakes by having the computer check whether you made them?
Why did it take something like 40 years to get a LaTeX replacement, even though LaTeX is terrible in very obvious ways?
These things have in common that building them is a big engineering challenge. It feels like maybe this explains it, together with the fact that the people who would have benefited from these technologies were in a position where the cost of creating them would have exceeded the benefit they expected to get from them.
We can also consider this point for Haskell and Clojure. Certainly, these two technologies have their flaws and could be improved. But then again, improving them would be a massive engineering challenge.
“Why is there basically no widely used homoiconic language”
Well, there’s Lisp, in its many variants. And there’s R. Probably several others.
The thing is, while homoiconicity can be useful, it’s not close to being a determinant of how useful the language is in practice. As evidence, I’d point out that probably 90% of R users don’t realize that it’s homoiconic.
I am also not sure how useful it is, but I would be very careful about saying that R programmers not using it is strong evidence that it is not that useful. That was basically the point I wanted to make with the original comment. Homoiconicity might be hard to learn and use compared to, say, learning a for loop in Python, and that might be why people don't learn it: they don't understand how it could be useful. Probably most R users have never even heard of homoiconicity, and if they had, they would ask "Well, I don't see how this is useful." But again, that does not mean it is not useful.
Probably many people at least vaguely know the concept of a pure function. But probably most don't actually use pure functions in situations where doing so would be advantageous, because they can't identify these situations.
Probably they don't even know the basic arguments, because they've never heard them, for why one would care about making functions pure. By your line of argument, we could now conclude that pure functions are clearly not very useful in practice, which I think is, at minimum, an overstatement. Clearly they can be useful. My current model says that they are actually very useful.
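To make the distinction concrete, here is a minimal R sketch (the function names are made up for illustration):

total <- 0
add_impure <- function(x) {   # impure: reads and mutates state outside itself
  total <<- total + x
  total
}
add_pure <- function(acc, x) acc + x   # pure: output depends only on the inputs

add_impure(5)   # 5
add_impure(5)   # 10: same input, different output, because of hidden state
add_pure(0, 5)  # 5
add_pure(0, 5)  # 5: same input, same output, always; trivial to test and reason about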
[Edit:] Also, R is not homoiconic, lol. At least not in a strong sense like Lisp, at least according to what this guy on GitHub says. I would also guess this is correct, from remembering how R looks and from looking at a few code samples now. In Lisp your program is a bunch of lists; in R it is not. What is the data structure instance that is equivalent to this expression?
%sumx2y2% <- function(e1, e2) {e1 ^ 2 + e2 ^ 2}
R is definitely homoiconic. For your example (putting the %sumx2y2% in backquotes to make it syntactically valid), we can examine it like this:
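(Sketching the kind of session meant here; any such sequence of accesses works:)

e <- quote(`%sumx2y2%` <- function(e1, e2) {e1 ^ 2 + e2 ^ 2})
class(e)     # "<-": the whole expression is a language object (a call)
e[[1]]       # `<-`, the assignment operator, as a symbol
e[[2]]       # `%sumx2y2%`, the name being assigned to
e[[3]]       # function(e1, e2) {...}, the function definition, itself a call
e[[3]][[3]]  # {e1^2 + e2^2}, the body of that function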
And so forth. Of course, you can also construct that expression bit by bit if you like, and you can construct such expressions and use them purely as data structures, never evaluating them, though that would be a bit of a strange thing to do. The only difference from Lisp is that R has a variety of composite data types, including "language", whereas Lisp just has S-expressions and atoms.
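For instance, a sketch of the bit-by-bit construction (one way among several in R):

lhs  <- as.name("%sumx2y2%")
args <- as.pairlist(alist(e1 = , e2 = ))       # the formals, with no defaults
body <- quote({ e1 ^ 2 + e2 ^ 2 })
expr <- call("<-", lhs, call("function", args, body))
eval(expr)      # defines %sumx2y2% in the current environment
2 %sumx2y2% 3   # 13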
OK, I was confused before. I think homoiconicity is really several things. Here are some examples:
In basically any programming language L, you can have a program A that writes a file containing valid L source code, which is then run by A.
In some sense, Python is homoiconic, because you can have a string and then exec it. Before you exec it (or in between execs) you can manipulate the string with normal string manipulation. (The sketch after this list contrasts this string level with the next, quote-based level.)
In R you have the quote operator, which allows you to take in code and return an object that represents this code and can be manipulated.
In Lisp, when you write an S-expression, the same S-expression can be interpreted as a program or as a list; it is in fact always a (possibly nested) list. If we interpret the list as a program, the first element of the list is the symbol of the function, and the remaining entries are the arguments to that function.
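To make the difference between the string level (as in the Python example) and the quote level concrete, here is a small sketch, using R for both:

# String level: code is just text until you parse and eval it.
src <- "x + 1"
src <- gsub("1", "2", src)               # blind textual surgery
eval(parse(text = src), list(x = 10))    # 12

# Quote level: code is a structured object you can index into.
e <- quote(x + 1)
e[[3]] <- 2                              # replace the literal 1 directly
eval(e, list(x = 10))                    # 12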
Although I can't put my finger on it exactly, it feels to me like the degree of homoiconicity increases as you go further down this list.
The basic idea, though, always seems to be that we have a program that can manipulate the representation of another program. This is actually more general than homoiconicity, since we could have, for example, a Python program manipulating Haskell code. It seems that the further we go down the list, the easier this kind of program manipulation gets.