In my experience the sense of Lisp syntax being idiosyncratic disappears quickly, and gets replaced by a sense of everything else being idiosyncratic.
The straightforward prefix notation / Lisp equivalent of return x1 if n = 1 else return x2 is (if (= n 1) x1 x2). To me this seems shorter and clearer. However I admit the clarity advantage is not huge, and is clearly subjective.
(An alternative is postfix notation: ((= n 1) x1 x2 if) looks unnatural, though (2 (3 4 *) +) and (+ 2 (* 3 4)) aren’t too far apart in my opinion, and I like the cause->effect relationship implied in representing “put 1, 2, and 3 into f” as (1 2 3 f) or (1 2 3 -> f) or whatever.)
Note also that since Lisp does not distinguish between statements and values:
you don’t need return, and
you don’t need a separate ternary operator for branching inside a value (the x if c else y syntax in Python, for example) on top of the normal if.
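To make that concrete, here is a minimal Python sketch (the function names are just illustrative): Python needs both an if statement paired with return and a separate ternary expression, while in Lisp the single if expression above covers both uses.

def sign_label(n):
    # statement form: if/else plus an explicit return
    if n >= 0:
        return "non-negative"
    return "negative"

def sign_label_expr(n):
    # expression form: the separate ternary syntax
    return "non-negative" if n >= 0 else "negative"

# in Lisp one construct does both jobs:
# (if (>= n 0) "non-negative" "negative")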
I think Python list comprehensions (or the similarly-styled things in e.g. Haskell) are a good example of the “other way” of thinking about syntax. Guido van Rossum once said something like: it’s clearer to have [x for x in l if f(x)] than filter(f, l). My immediate reaction to this is: look at how much longer one of them is. When filter is one function call rather than a syntax-heavy list comprehension, I feel it makes it clearer that filter is a single concept that can be abstracted out.
Now of course the Python is nicer because it’s more English-like (and also because you don’t have to remember whether the f is a condition for the list element to be included or excluded, something that took me embarrassingly long to remember correctly …). I’d also guess that I might be able to hammer out Python list comprehensions a bit faster and with less mental overhead in simple cases, since the order in which things are typed out is more like the order in which you think of it.
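(As a sanity check on that include/exclude point, here is a small Python sketch with is_even as an illustrative predicate: both spellings keep the elements for which the predicate is true.)

def is_even(x):
    return x % 2 == 0

l = [1, 2, 3, 4, 5]
# both keep the matching elements rather than dropping them
assert [x for x in l if is_even(x)] == [2, 4]
assert list(filter(is_even, l)) == [2, 4]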
However, I do feel the Englishness starts to hurt at some point. Consider this:
[x for y in l for x in y]
What does it do? The first few times I saw this (and even now sometimes), I would read it, backtrack, then start figuring out where the parentheses should go and end up confused about the meaning of the syntax: “x for y in l, for x in y, what? Wait no, x, for y in l, for x in y, so actually meaning a list of every x for every x in every y in l”.
What I find clearer is something like:
(mapcat (lambda (x) x) l)
or
(reduce append l)
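In Python terms, a rough sketch of the same equivalence (using functools.reduce with operator.add as stand-ins for reduce and append):

from functools import reduce
from operator import add

l = [[1, 2], [3], [4, 5]]
# the comprehension and the reduce-based version flatten one level identically
assert [x for y in l for x in y] == [1, 2, 3, 4, 5]
assert reduce(add, l, []) == [1, 2, 3, 4, 5]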
Yes, this means you need to remember a bunch of building blocks (filter, map, reduce, and maybe more exotic ones like mapcat). Also, you need to remember which position each argument goes in (function first, then collection), and there are no syntactic signposts to remind you, unlike with the list comprehension syntax. However, once you do:
they compose and mix very nicely (for example, (mapcat f l) “factors into” (reduce append (map f l)); see the sketch after this list), and
there are no “seams” between the built-in list syntax and any compositions on top of them (unlike Python, where if you define your own functions to manipulate lists, they look different from the built-in list comprehension syntax).
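Here is the promised sketch of that factoring in Python (mapcat spelled with itertools.chain, and pairs as an illustrative f):

from functools import reduce
from itertools import chain
from operator import add

def mapcat(f, coll):
    # rough analogue of Lisp's mapcat: map, then concatenate one level
    return list(chain.from_iterable(map(f, coll)))

def pairs(x):
    return [x, -x]

l = [1, 2, 3]
# (mapcat f l) gives the same result as (reduce append (map f l))
assert mapcat(pairs, l) == reduce(add, map(pairs, l), []) == [1, -1, 2, -2, 3, -3]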
I think the last point there is a big consideration (and largely an aesthetic one!). There’s something inelegant about a programming language having:
many ways to write a mapping from values to values, some in infix notation (1+1), some in prefix notation (my_function(val)), and others in even weirder forms (x if c else y);
expressions that may either reduce to a value (most things) or not reduce to a value at all (if it’s an if or a return and so on);
a syntax style you extend in one way (e.g. prefix notation with def my_function(val): [...]) and others that you either don’t extend, or extend in weird ways (def __eq__(self, other): [...]).
Instead you can make a programming language that has exactly one style of syntax (prefix), exactly one type of compound expression (parenthesised terms where the first thing is the function/macro name), and a consistent way to extend all the types of syntax (define functions or define macros). This is especially true since the “natural” abstract representation of a program is a tree (in the same way that the “natural” abstract representation of a sentence is its syntax tree), and prefix notation makes this very clear: you have a node type, and the children of the node.
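As a rough illustration of the program-as-tree point (a Python sketch; the tuple encoding and evaluate are just illustrative), a prefix-notation expression is essentially its own syntax tree, with the node type first and the children after it:

from math import prod

# (+ 2 (* 3 4)) written directly as a (node-type, child, child, ...) tree
expr = ("+", 2, ("*", 3, 4))

def evaluate(node):
    if not isinstance(node, tuple):
        return node  # a leaf: just a number
    op, *children = node
    values = [evaluate(c) for c in children]
    if op == "+":
        return sum(values)
    if op == "*":
        return prod(values)
    raise ValueError(f"unknown node type: {op}")

assert evaluate(expr) == 14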
I think the crux is something like: do you prefer a syntax that is like a collection of different tools for different tasks, or a syntax that highlights how everything can be reduced to a tight set of concepts?
Hmm, neither Lisp nor Python feels natural to me, but I understand that it is just a matter of getting used to. On the other hand, for all JS’s faults, its style of lambdas and filter/map/reduce felt natural to me right away.