In my experience the sense of Lisp syntax being idiosyncratic disappears quickly, and gets replaced by a sense of everything else being idiosyncratic.
The straightforward prefix notation / Lisp equivalent of return x1 if n = 1 else return x2 is (if (= n 1) x1 x2). To me this seems shorter and clearer. However I admit the clarity advantage is not huge, and is clearly subjective.
(An alternative is postfix notation: ((= n 1) x1 x2 if) looks unnatural, though (2 (3 4 *) +) and (+ 2 (* 3 4)) aren’t too far apart in my opinion, and I like the cause->effect relationship implied in representing “put 1, 2, and 3 into f” as (1 2 3 f) or (1 2 3 -> f) or whatever.)
Note also that since Lisp does not distinguish between statements and values:
you don’t need return, and
you don’t need a separate ternary operator for branching in a value (the x if c else y syntax in Python, for example) on top of the normal if.
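In Python the distinction shows up as two separate constructs for the same branch (a made-up example):

```python
# Expression position: the ternary operator.
def describe(n):
    return "one" if n == 1 else "other"

# Statement position: if plus explicit return.
def describe_stmt(n):
    if n == 1:
        return "one"
    return "other"

# In Lisp, both are the single form (if (= n 1) "one" "other").
assert describe(1) == describe_stmt(1) == "one"
assert describe(2) == describe_stmt(2) == "other"
```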
I think Python list comprehensions (or the similarly-styled things in e.g. Haskell) are a good example of the “other way” of thinking about syntax. Guido van Rossum once said something like: it’s clearer to have [x for x in l if f(x)] than filter(f, l). My immediate reaction to this is: look at how much longer one of them is. When filter is one function call rather than a syntax-heavy list comprehension, I feel it makes it clearer that filter is a single concept that can be abstracted out.
Now of course the Python is nicer because it’s more English-like (and also because you don’t have to remember whether the f is a condition for the list element to be included or excluded, something that took me embarrassingly long to remember correctly …). I’d also guess that I might be able to hammer out Python list comprehensions a bit faster and with less mental overhead in simple cases, since the order in which things are typed out is more like the order in which you think of it.
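The two spellings are interchangeable in Python; a quick check (with f an assumed predicate, picked just for illustration):

```python
def f(x):
    return x % 2 == 0  # an assumed predicate, for illustration

l = [1, 2, 3, 4, 5, 6]

# Comprehension: f(x) is the condition for x to be *included*.
assert [x for x in l if f(x)] == [2, 4, 6]

# filter spells the same thing as one function call.
assert list(filter(f, l)) == [2, 4, 6]
```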
However, I do feel the Englishness starts to hurt at some point. Consider this:
[x for y in l for x in y]
What does it do? The first few times I saw this (and even now sometimes), I would read it, backtrack, then start figuring out where the parentheses should go and end up confused about the meaning of the syntax: “x for y in l, for x in y, what? Wait no, x, for y in l, for x in y, so actually meaning a list of every x for every x in every y in l”.
What I find clearer is something like:
(mapcat (lambda (x) x) l)
or
(reduce append l)
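For what it’s worth, the three readings agree. In Python terms (with itertools.chain.from_iterable standing in for mapcat with the identity function, and an explicit reduce for the (reduce append l) version):

```python
from functools import reduce
from itertools import chain

l = [[1, 2], [3], [4, 5]]

# The comprehension: every x, for every y in l, for every x in y.
assert [x for y in l for x in y] == [1, 2, 3, 4, 5]

# The (mapcat identity l) analogue:
assert list(chain.from_iterable(l)) == [1, 2, 3, 4, 5]

# The (reduce append l) analogue:
assert reduce(lambda a, b: a + b, l, []) == [1, 2, 3, 4, 5]
```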
Yes, this means you need to remember a bunch of building blocks (filter, map, reduce, and maybe more exotic ones like mapcat). Also, you need to remember which argument goes in which position (function first, then collection), and there are no syntactic signposts to remind you, unlike with the list comprehension syntax. However, once you do:
they compose and mix very nicely (for example, (mapcat f l) “factors into” (reduce append (map f l))), and
there are no “seams” between the built-in list syntax and any compositions on top of them (unlike Python, where if you define your own functions to manipulate lists, they look different from the built-in list comprehension syntax).
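The “factors into” claim can be checked mechanically; here is a Python rendering that simply defines mapcat by that factorisation (Python has no built-in of that name):

```python
from functools import reduce

def mapcat(f, coll):
    # (mapcat f l) defined as (reduce append (map f l)):
    # map each element to a list, then concatenate the lists.
    return reduce(lambda a, b: a + b, map(f, coll), [])

assert mapcat(lambda x: [x, x * 10], [1, 2]) == [1, 10, 2, 20]
```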
I think the last point there is a big consideration (and largely an aesthetic one!). There’s something inelegant about a programming language having:
many ways to write a mapping from values to values, some in infix notation (1+1), some in prefix notation (my_function(val)), and still others in even weirder ways (x if c else y);
expressions that may either reduce to a value (most things) or not reduce to a value (if it’s an if or return and so on);
a syntax style you extend in one way (e.g. prefix notation with def my_function(val): [...]) and others that you either don’t extend, or extend in weird ways (def __eq__(self, other): [...]).
Instead you can make a programming language that has exactly one style of syntax (prefix), exactly one type of compound expression (parenthesised terms where the first thing is the function/macro name), and a consistent way to extend all the types of syntax (define functions or define macros). This is especially true since the “natural” abstract representation of a program is a tree (in the same way that the “natural” abstract representation of a sentence is its syntax tree), and prefix notation makes this very clear: you have a node type, and the children of the node.
I think the crux is something like: do you prefer a syntax that is like a collection of different tools for different tasks, or a syntax that highlights how everything can be reduced to a tight set of concepts?
Hmm, neither Lisp nor Python feels natural to me, but I understand that it is just a matter of getting used to them. On the other hand, for all JS’s faults, its style of lambda and filter/map/reduce felt natural to me right away.
Maybe prefix notation feels weird because English (and Chinese, and Spanish, and Russian...) follow the subject-verb-object word order. “list.append(3)”, for example, is in SVO order, while “(append list 3)” is in VSO order.
Most languages primarily use either SOV or SVO word-orderings, with VSO trailing at a distant third. Funnily enough, both Classical Arabic and Biblical Hebrew are VSO languages. Looks like God does speak Lisp after all.
This is an interesting point; I hadn’t thought about the relation to SVO/etc. before! I wonder whether SVO/SOV dominance is a historical quirk, or if the human brain actually is optimized for those orders.
The verb-first emphasis of prefix notation like in classic Lisp is clearly backwards sometimes: a deeply nested chain of calls has high mental overhead to parse relative to what it’s expressing, and I freely admit that writing the steps out in the order they happen is more readable. Clojure, a modern Lisp dialect, solves this with threading macros: the idea is that you write (->> ...) and in the expressions after ->> the previous expression gets substituted as the last argument to the next.
Thanks to the Lisp macro system, you can write a threading macro even in a Lisp that doesn’t have it (and I know that for example in Racket you can import a threading macro package even though it’s not part of the core language).
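As an illustration (my own example): in Clojure, (reduce + (filter odd? (map inc (range 10)))) can be threaded as (->> (range 10) (map inc) (filter odd?) (reduce +)), which reads in execution order. The substitution rule can be sketched as a plain Python helper (a hypothetical thread_last function; a real threading macro rewrites the code at compile time instead of taking pre-built steps):

```python
def thread_last(value, *steps):
    # Each step is (fn, *leading_args); the threaded value is passed
    # as the LAST argument, mimicking Clojure's ->>.
    for fn, *args in steps:
        value = fn(*args, value)
    return value

# (->> (range 10) (map inc) (filter odd?) (reduce +)) becomes:
result = thread_last(
    range(10),
    (map, lambda x: x + 1),          # (map inc ...)
    (filter, lambda x: x % 2 == 1),  # (filter odd? ...)
    (sum,),                          # (reduce + ...)
)
assert result == 25
```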
I’m the opposite. My first two languages are VSO, so VSO ordering (function first, then arguments) comes naturally to me. Some languages are SOV—Japanese is the most prominent example. Don’t think I know of any proglangs with that form of syntax, though.
In programming, SOV is known as Reverse Polish Notation. First must the arguments come before the operation you write. Forth, Postscript also, such languages are.
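A minimal sketch (my own) of why arguments-first suits machines like Forth: with the operands already on a stack, each operation just pops and pushes, so evaluation is a single left-to-right pass with no lookahead:

```python
def eval_rpn(tokens):
    # Arguments come first, operation last; a stack makes SOV-order
    # evaluation a single pass.
    stack = []
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    for tok in tokens:
        if tok in ops:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))
    return stack.pop()

# (2 (3 4 *) +) from earlier, flattened to RPN:
assert eval_rpn("2 3 4 * +".split()) == 14
```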
I wonder if there is a syntax that feels less idiosyncratic to someone who is used to procedural programming? For example, the procedural version feels much more natural to me than the LISP equivalent. (Or maybe that’s how we ended up with Python.)
There are; there’s a tradeoff. One of the advantages of Lisp is that it’s very easy to parse and manipulate programmatically, because the syntax is so simple and regular.
I don’t know of any syntaxes that achieve both the easy and regular syntax for machine parsing and ‘natural’ syntax for humans.
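The “easy to parse” point above is concrete enough to demonstrate: a complete s-expression reader fits in about a dozen lines. A minimal Python sketch (handling only parentheses and whitespace-separated atoms, with atoms kept as strings):

```python
def parse(src):
    # Minimal s-expression reader: returns nested lists of atom strings.
    tokens = src.replace("(", " ( ").replace(")", " ) ").split()

    def read(pos):
        token = tokens[pos]
        if token == "(":
            node, pos = [], pos + 1
            while tokens[pos] != ")":
                child, pos = read(pos)
                node.append(child)
            return node, pos + 1  # skip the closing ")"
        return token, pos + 1

    tree, _ = read(0)
    return tree

# The tree structure is exactly the node type plus its children:
assert parse("(if (= n 1) x1 x2)") == ["if", ["=", "n", "1"], "x1", "x2"]
```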
As for God speaking in Lisp, we know that He at least writes it: https://youtu.be/5-OjTPj7K54
Mandatory xkcd link: https://xkcd.com/224