My blog is here. You can subscribe for new posts there.
My personal site is here.
My X/Twitter is here.
You can contact me using this form.
This is an interesting point; I haven’t thought about the relation to SVO/etc. before! I wonder whether SVO/SOV dominance is a historical quirk, or whether the human brain actually is optimized for those orders.
The verb-first emphasis of prefix notation like in classic Lisp is clearly backwards sometimes. Parsing this has high mental overhead relative to what it’s expressing:
(reduce +
        (filter even?
                (take 100 fibonacci-numbers)))
I freely admit this is more readable:
fibonacci-numbers.take(100).filter(is_even).reduce(plus)
Clojure, a modern Lisp dialect, solves this with threading macros. The idea is that you can write
(->> fibonacci-numbers
     (take 100)
     (filter even?)
     (reduce +))
and in the expressions after ->>, the previous expression gets substituted as the last argument to the next.
Thanks to the Lisp macro system, you can write a threading macro even in a Lisp that doesn’t have it (and I know that for example in Racket you can import a threading macro package even though it’s not part of the core language).
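To sketch the idea (in Clojure syntax, using the made-up name my->> so it doesn’t clash with the built-in, and only handling list-shaped steps; this is an illustration, not how Clojure’s own ->> is implemented), a minimal thread-last macro can look like:

;; A minimal thread-last macro written as an ordinary macro.
;; my->> is a hypothetical name; Clojure itself already ships ->>.
(defmacro my->> [x & forms]
  (if (empty? forms)
    x
    (let [[f & args] (first forms)]
      `(my->> (~f ~@args ~x) ~@(rest forms)))))

;; (my->> fibonacci-numbers (take 100) (filter even?) (reduce +))
;; expands, step by step, into
;; (reduce + (filter even? (take 100 fibonacci-numbers)))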
As for God speaking in Lisp, we know that He at least writes it: https://youtu.be/5-OjTPj7K54
In my experience the sense of Lisp syntax being idiosyncratic disappears quickly, and gets replaced by a sense of everything else being idiosyncratic.
The straightforward prefix-notation / Lisp equivalent of return x1 if n = 1 else return x2 is (if (= n 1) x1 x2). To me this seems shorter and clearer. However, I admit the clarity advantage is not huge, and is clearly subjective.
(An alternative is postfix notation: ((= n 1) x1 x2 if) looks unnatural, though (2 (3 4 *) +) and (+ 2 (* 3 4)) aren’t too far apart in my opinion, and I like the cause->effect relationship implied in representing “put 1, 2, and 3 into f” as (1 2 3 f) or (1 2 3 -> f) or whatever.)
Note also that since Lisp does not distinguish between statements and values:
you don’t need return, and
you don’t need one construct for branching in a value (the x if c else y syntax in Python, for example) and a separate one for a normal if (see the sketch below).
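For example, in Clojure if is itself an expression, so it can sit directly in value position (describe here is just a made-up helper name):

;; No return and no separate ternary form: if already yields a value.
(defn describe [n]
  (str "got " (if (= n 1) "one item" "several items")))

(describe 1) ;; => "got one item"
(describe 3) ;; => "got several items"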
I think Python list comprehensions (or the similarly-styled things in e.g. Haskell) are a good example of the “other way” of thinking about syntax. Guido van Rossum once said something like: it’s clearer to have [x for x in l if f(x)] than filter(f, l). My immediate reaction to this is: look at how much longer one of them is. When filter is one function call rather than a syntax-heavy list comprehension, I feel it makes it clearer that filter is a single concept that can be abstracted out.
Now of course the Python is nicer because it’s more English-like (and also because you don’t have to remember whether the f is a condition for the list element to be included or excluded, something that took me embarrassingly long to remember correctly …). I’d also guess that I might be able to hammer out Python list comprehensions a bit faster and with less mental overhead in simple cases, since the order in which things are typed out is more like the order in which you think of it.
However, I do feel the Englishness starts to hurt at some point. Consider this:
[x for y in l for x in y]
What does it do? The first few times I saw this (and even now sometimes), I would read it, backtrack, then start figuring out where the parentheses should go and end up confused about the meaning of the syntax: “x for y in l, for x in y, what? Wait no, x, for y in l, for x in y, so actually meaning a list of every x for every x in every y in l”.
What I find clearer is something like:
(mapcat (lambda (x) x) l)
or
(reduce append l)
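For concreteness, here is the same one-level flattening on a small input, written with Clojure’s names (identity in place of (lambda (x) x), concat in place of append):

;; Flatten one level of nesting
(mapcat identity [[1 2] [3 4] [5]])  ;; => (1 2 3 4 5)
(reduce concat [[1 2] [3 4] [5]])    ;; => (1 2 3 4 5)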
Yes, this means you need to remember a bunch of building blocks (filter, map, reduce, and maybe more exotic ones like mapcat). Also, you need to remember which position each argument goes in (function first, then collection), and there are no syntactic signposts to remind you, unlike with the list comprehension syntax. However, once you do:
they compose and mix very nicely (for example, (mapcat f l) “factors into” (reduce append (map f l)); see the sketch after this list), and
there are no “seams” between the built-in list syntax and any compositions on top of them (unlike Python, where if you define your own functions to manipulate lists, they look different from the built-in list comprehension syntax).
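As a small check of that “factors into” claim (again using Clojure’s concat where the Lisp above says append, with reverse standing in for an arbitrary f):

(def nested [[1 2] [3 4] [5]])
(mapcat reverse nested)              ;; => (2 1 4 3 5)
(reduce concat (map reverse nested)) ;; => (2 1 4 3 5)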
I think the last point there is a big consideration (and largely an aesthetic one!). There’s something inelegant about a programming language having:
many ways to write a mapping from values to values, some in infix notation (1+1) and some in prefix notation (my_function(val)), and others even weirder things (x if c else y);
expressions that may either reduce to a value (most things) or not reduce to a value (if it’s an if or return or so on);
a syntax style you extend in one way (e.g. prefix notation with def my_function(val): [...]) and others that you either don’t extend, or extend in weird ways (def __eq__(self, other): [...]).
Instead you can make a programming language that has exactly one style of syntax (prefix), exactly one type of compound expression (parenthesised terms where the first thing is the function/macro name), and a consistent way to extend all the types of syntax (define functions or define macros). This is especially true since the “natural” abstract representation of a program is a tree (in the same way that the “natural” abstract representation of a sentence is its syntax tree), and prefix notation makes this very clear: you have a node type, and the children of the node.
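To make the “consistent way to extend” point concrete, here is a Clojure-flavoured sketch (square and unless are made-up examples); both kinds of extension keep the same prefix shape:

;; Extending the language with a function...
(defn square [x] (* x x))

;; ...and with a macro looks structurally the same.
(defmacro unless [test & body]
  `(if ~test nil (do ~@body)))

;; (square 4)                       ;; => 16
;; (unless false (println "runs"))  ;; prints "runs", returns nil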
I think the crux is something like: do you prefer a syntax that is like a collection of different tools for different tasks, or a syntax that highlights how everything can be reduced to a tight set of concepts?
Since some others are commenting about not liking the graph-heavy format: I really liked the format, in particular because having it as graphs rather than text made it much faster and easier to go through and understand, and left me with more memorable mental images. Adding limited text probably would not hurt, but adding lots would detract from the terseness that this presentation effectively achieves. Adding clear definitions of the terms at the start would have been valuable though.
Rather than thinking of a single example that I carried throughout as you suggest, I found it most useful to generate one or more examples as I looked at each graph (e.g. for the danger-zone graphs, in order: judging / software testing, politics, forecasting / medical diagnosis).
Regarding the end of slavery: I think you make good points and they’ve made me update towards thinking that the importance of materialistic Morris-style models is slightly less and cultural models slightly more.
I’d be very interested to hear what were the anti-slavery arguments used by the first English abolitionists and the medieval Catholic Church (religion? equality? natural rights? utilitarian?).
Which, evidently, doesn’t prevent the usual narrative from being valid in other places, that is, countries in which slavery was still well accepted finding themselves forced, first militarily, then technologically, and finally economically, to adapt or perish.
I think there’s also another way for the materialistic and idealistic accounts to both be true in different places: Morris’ argument is specifically about slavery existing when wage incentives are weak, and perhaps this holds in places like ancient Egypt and the Roman Empire, but had stopped holding in proto-industrial places like 16th-18th century western Europe. However I’m not aware of what specific factor would drive this.
One piece of evidence on whether economics or culture is more important would be comparing how many cases there are where slavery existed/ended in places without cultural contact but with similar economic conditions and institutions, to how many cases there are of slavery existing/ending in places with cultural contact but different economic conditions/institutions.
Thank you for this very in-depth comment. I will reply to your points in separate comments, starting with:
According to him, the end of the feudal system in England, and its turning into a modern nation-state, involved among other things the closing off and appropriation, by nobles as a reward from the kingdom, of the former common farmlands they farmed on, as well as the confiscation of the lands owned by the Catholic Church, which for all practical purposes also served as common farmlands. This resulted in a huge mass of farmers with no access to land, or only very diminished access, who in turn decades later became the proletarians for the newly developing industries. If that’s accurate, then it may be the case that the Industrial Revolution wouldn’t have happened had all those poor not existed, since the very first industries wouldn’t have been attractive compared to the conditions that non-forcibly-starved farmers had.
This is very interesting and something I haven’t seen before. Based on some quick searching, this seems to be referring to the Inclosure Acts (which were significant, affecting 1/6th of English land) and perhaps specifically this one, while the Catholic Church land confiscation was the 1500s one. My priors on this having a major effect are somewhat skeptical because:
The general shape of English historical GDP/capita is a slight post-plague rise, followed by nothing much until a gradual rise in the 1700s and then takeoff in the 1800s. Likewise, skimming through this, there seem to be no drastic changes in wealth inequality around the time of the Inclosure Acts, though the share of wealth held by the top 10% rises slightly in the late 1700s and the personal estates (note: specifically excluding real estate) of farmers and yeomen drop slightly around 1700 before rebounding. Any pattern of a growing number of poor farmers must evade these statistics, either by being small enough, or by not being captured in these crude overall stats (which is very possible, especially if the losses for one set of farmers were balanced by gains for another).
Other sources I’ve read support the idea that farmers in general prefer industrial jobs. It’s not just Steven Pinker either; Vaclav Smil’s Energy and Civilization (my review) has this passage:
Moreover, the drudgery of field labor in the open is seldom preferable even to unskilled industrial work in a factory. In general, typical factory tasks require lower energy expenditures than does common farm work, and in a surprisingly short time after the beginning of mass urban industrial employment the duration of factory work became reasonably regulated
It’s probably the case that it’s easier to recruit landless farmers into industrial jobs, and I can imagine plausible models where farmers resist moving to cities, especially for uncertainty-avoidance / risk-aversion reasons. However, the effect of this, especially in the long term, seems limited by things like population growth in (already populous) cities, people having to move off their family farms anyways due to primogeniture, and people generally being pretty good at exploiting available opportunities. An exception might be if early industrialization was tenable only under a strict labor availability threshold that was met only because of the mass of landless farmers created by the English acts.
Thanks for the link to Sarah Constantin’s post! I remember reading it a long time ago but couldn’t have found it again now if I had tried. It was another thing (along with Morris’s book) that made me update towards thinking that historical gender norms are heavily influenced by technology level and type. Evidence that technology type variation even within farming societies had major impacts on gender norms also seems like fairly strong support for Morris’ idea that the even larger variation between farming societies and foragers/industrialists can explain their different gender norms.
John Danaher’s work looks relevant to this topic, but I’m not convinced that his idea of collective/individual/artificial intelligence as the ideal types of future axiology space is cutting it in the right way. In particular, I have a hard time thinking of how you’d summarize historical value changes as movement in the area spanned by these types.
That line was intended to (mildly humorously) make the point that we are aware that there are many other serious risks in the popular imagination. Our central point is that AI x-risk is grand civilisational threat #1, so we wanted to lead with that, and since people think many other things are potential civilisational catastrophes (if not x-risks) we thought it made sense to mention those (and also implicitly put AI into the reference class of “serious global concern”). We discussed this opener and got feedback on it from several others, and while there was some debate we didn’t see any fundamental problem with it. The main consideration for keeping it was that we prefer specific and even provocative-leaning writing that makes its claims upfront and without apology (e.g. “AI is a bigger threat than climate change” is a provocative statement; if that is a relevant part of our world model, it seems honest to point that out).
The general point we take from your comment is that we badly misjudged how its tone comes across. Thanks for this feedback; we’ve changed it. However, we’re confused about the specifics of your point, and unfortunately haven’t acquired any concrete model of how to avoid similar errors in the future apart from “be careful about the tone of any statements that even vaguely imply something about geopolitics”. (I’m especially confused about how you got the reading that we equated the threat level from Putin and nuclear weapons, and it seems to me that it is “mudslinging” or “propaganda” only to the extent that acknowledging that many people think Putin is a major threat is either of those things.)
Beyond the general tone, another thing we got wrong here was not sufficiently disambiguating between “we think these other things are plausible [or, in your reading, equivalent?] sources of catastrophe, and therefore you need a high bar of evidence before thinking AI is a greater one” and “many people think these are more concrete and plausible sources of catastrophe than AI”. The original intended reading was “bold” as in “socially bold, relative to what many people think”, and therefore making points only about public opinion.
Correcting the previous mistake might have looked like:
Based on this feedback, however, we have now removed any comparison or mention of non-AI threats. For the record, the entire original paragraph is: