Rationalism before the Sequences
I’m here to tell you a story about what it was like to be a rationalist decades before the Sequences and the formation of the modern rationalist community. It is not the only story that could be told, but it is one that runs parallel to Eliezer Yudkowsky’s own and has important connections to how his ideas developed.
My goal in writing this essay is to give the LW community a sense of the prehistory of their movement. It is not intended to be “where Eliezer got his ideas”; that would be stupidly reductive. I aim more to exhibit where the drive and spirit of the Yudkowskian reform came from, and the interesting ways in which Eliezer’s formative experiences were not unique.
My standing to write this essay begins with the fact that I am roughly 20 years older than Eliezer and read many of his sources before he was old enough to read. I was acquainted with him over an email list before he wrote the Sequences, though I somehow managed to forget those interactions afterwards and only rediscovered them while researching this essay. In 2005 he even sent me a book manuscript to review that covered some of the topics later developed in the Sequences.
My reaction on reading “The Twelve Virtues of Rationality” a few years later was twofold. It was a different kind of writing than the book manuscript—stronger, more individual, taking some serious risks. On the one hand, I was deeply impressed by its clarity and courage. On the other hand, much of it seemed very familiar, full of hints and callbacks and allusions to books I knew very well.
Today it is probably more difficult to back-read Eliezer’s sources than it was in 2006, because the body of more recent work within his reformation of rationalism tends to get in the way. I’m going to attempt to draw aside that veil by talking about four specific topics: General Semantics, analytic philosophy, science fiction, and Zen Buddhism.
Before I get to those specifics, I want to try to convey that sense of what it was like. I was a bright geeky kid in the 1960s and 1970s, immersed in a lot of obscure topics often with an implicit common theme: intelligence can save us! Learning how to think more clearly can make us better! But at the beginning I was groping as if in a dense fog, unclear about how to turn that belief into actionable advice.
Sometimes I would get a flash of light through the fog, or at least a sense that there were other people on the same lonely quest. A bit of that sense sometimes drifted over USENET, an early precursor of today’s Internet fora. More often than not, though, the clue would be fictional: somebody’s imagination of what it would be like to increase intelligence, to burn away error and think more clearly.
When I found non-fiction sources on rationality and intelligence increase, I devoured them. Alas, most were useless junk. But in a few places I found gold. Not by coincidence, the places where I found real value were sources Eliezer would later draw on. I’m not guessing about this; I was able to confirm it, first from Eliezer’s explicit reports of what influenced him and then via an email conversation.
Eliezer and I were not unique. We know directly of a few others with experiences like ours. There were likely dozens of others we didn’t know—possibly hundreds—on parallel paths, all hungrily seeking clarity of thought, all finding largely overlapping subsets of clues and techniques because there simply wasn’t that much out there to be mined.
One piece of evidence for this parallelism besides Eliezer’s reports is that I bounced a draft of this essay off Nancy Lebovitz, a former LW moderator who I’ve known personally since the 1970s. Her instant reaction? “Full of stuff I knew already.”
Around the time Nancy and I first met, some years before Eliezer Yudkowsky was born, my maternal grandfather gave me a book called “People In Quandaries”. It was an introduction to General Semantics. I don’t know, because I didn’t know enough to motivate the question when he was alive, but I strongly suspect that granddad was a member of one of the early GS study groups, probably the same one that included Robert Heinlein (they were near neighbors in Southern California in the early 1940s).
General Semantics is going to be a big part of my story. Twelve Virtues speaks of “carrying your map through to reflecting the territory”; this is a clear, obviously intentional callback to a central GS maxim that runs “The map is not the territory; the word is not the thing defined.”
I’m not going to give a primer on GS here. I am going to affirm that it rocked my world, and if the clue in Twelve Virtues weren’t enough Eliezer has reported in no uncertain terms that it rocked his too. It was the first time I encountered really actionable advice on the practice of rationality.
Core GS formulations were useful: cultivating consciousness of abstracting, remembering the map/territory distinction, avoiding the verb “to be” and the is-of-identity, recognizing that the geometry of the real world is non-Euclidean and that its logic is non-Aristotelian. They helped. They reduced the inefficiency of my thinking.
For the pre-Sequences rationalist, those of us stumbling around in that fog, GS was typically the most powerful single non-fictional piece of the available toolkit. After the millennium I would find many reflections of it in the Sequences.
This is not, however, meant to imply that GS is some kind of supernal lost wisdom that all rationalists should go back and study. Alfred Korzybski, the founder of General Semantics, was a man of his time, and some of the ideas he formulated in the 1930s have not aged well. Sadly, he was an absolutely terrible writer; reading “Science and Sanity”, his magnum opus, is like an endless slog through mud with occasional flashes of world-upending brilliance.
If Eliezer had done nothing else but give GS concepts a better presentation, that would have been a great deal. Indeed, before I read the Sequences I thought giving GS a better finish for the modern reader was something I might have to do myself someday—but Eliezer did most of that, and a good deal more besides, folding in a lot of sound thinking that was unavailable in Korzybski’s day.
When I said that Eliezer’s sources are probably more difficult to back-read today than they were in 2006, I had GS specifically in mind. Yudkowskian-reform rationalism has since developed a very different language for the large areas where it overlaps GS’s concerns. I sometimes find myself in the position of a native Greek speaker hunting for equivalents in that new-fangled Latin: they are usually present, but it can take some effort to bridge the gap.
Next I’m going to talk about some more nonfiction that might have had that kind of importance if a larger subset of aspiring rationalists had known enough about it: the analytic tradition in philosophy.
I asked Eliezer about this and learned that he himself never read any of what I would consider core texts: C.S. Peirce’s epoch-making 1878 paper “How To Make Our Ideas Clear”, for example, or W.V. Quine’s “Two Dogmas of Empiricism”. Eliezer got their ideas through secondary sources. How deeply pre-Sequences rationalists drew directly from this well seems to be much more variable than the more consistent theme of early General Semantics exposure.
However: even if filtered through secondary sources, tropes originating in analytic philosophy have ended up being central in every formulated version of rationalism since 1900, including General Semantics and Yudkowskian-reform rationalism. A notable one is the program of reducing philosophical questions to problems in language analysis, seeking some kind of flaw in the map rather than mysterianizing the territory. Another is the definition of “truth” as predictive power over some range of future observables.
But here I want to focus on a subtler point about origins rather than ends: these ideas were in the air around every aspiring rationalist of the last century, certainly including both myself and the younger Eliezer. Glimpses of light through the fog...
This is where I must insert a grumble, one that I hope is instructive about what it was like before the Sequences. I’m using the term “rationalist” retrospectively, but those among us who were seeking a way forward and literate in formal philosophy didn’t tend to use that term of ourselves at the time. In fact, I specifically avoided it, and I don’t believe I was alone in this.
Here’s why. In the history of philosophy, a “rationalist” is one who asserts the superiority of a priori deductive reasoning over grubby induction from mere material facts. The opposing term is “empiricist”, and in fact Yudkowskian-reform “rationalists” are, in strictly correct terminology, skeptical empiricists.
Alas, that ship has long since sailed. We’re stuck with “rationalist” as a social label now; the success of the Yudkowskian reform has nailed that down. But it’s worth remembering that in this case not only is our map not the territory, it’s not even immediately consistent with other equally valid maps.
Now we get to the fun part, where I talk about science fiction.
SF author Greg Bear probably closed the book on attempts to define science fiction as a genre in 1994 when he called it “the branch of fantastic literature which affirms the rational knowability of the universe”. It shouldn’t be surprising, then, that ever since the Campbellian Revolution of 1939 invented modern science fiction there has been an important strain in it of fascination with rationalist self-improvement.
I’m not talking about transhumanism here. The idea that we might, say, upload to machines with vastly greater computational capacity is not one that fed pre-Yudkowskian rationalism, because it wasn’t actionable. No; I’m pointing at more attainable fictions about learning to think better, or discovering a key that unlocks a higher level of intelligence and rationality in ourselves. “Ultrahumanist” would be a better term for this, and I’ll use it in the rest of this essay.
I’m going to describe one such work in some detail, because (a) wearing my SF-historian hat I consider it a central exemplar of the ultrahumanist subgenre, and (b) I know it had a large personal impact on me.
“Gulf”, by Robert A. Heinlein, published in the October–November 1949 Astounding Science Fiction. A spy on a mission to thwart an evil conspiracy stumbles over a benign one—people who call themselves “Homo Novis” and have cultivated techniques of rationality and intelligence increase, including an invented language that promotes speed and precision of thought. He is recruited by them, and a key part of his training involves learning the language.
At the end of the story he dies while saving the world, but the ostensible plot is not really the point. It’s an excuse for Heinlein to play with some ideas, clearly derived in part from General Semantics, about what a “better” human being might look and act like—including, crucially, the moral and ethical dimension. One of the tests the protagonist doesn’t know he’s passing is when he successfully cooperates in gentling a horse.
The most important traits of the new humans are that (a) they prize rationality under all circumstances—to be accepted by them you have to retain clear thinking and problem-solving capability even when you’re stressed, hungry, tired, cold, or in combat; and (b) they’re not some kind of mutation or artificial superrace. They are human beings who have chosen to pool their efforts to make themselves more reliably intelligent.
There was a lot of this sort of GS-inspired ultrahumanism going around in Golden Age SF between 1940 and 1960. Other proto-rationalists may have been more energized by other stories in that current. Eliezer remembers and acknowledges “Gulf” as an influence but reports having been more excited by “The World of Null-A” (1946). Isaac Asimov’s “Foundation” novels (1942-1953) were important to him as well even though there was not much actionable in them about rationality at the individual level.
As for me, “Gulf” changed the direction of my life when I read it sometime around 1971. Perhaps I would have found that direction anyway, but...teenage me wanted to be homo novis. More, I wanted to deserve to be homo novis. When my grandfather gave me that General Semantics book later in the same decade, I was ready.
That kind of imaginative fuel was tremendously important, because we didn’t have a community. We didn’t have a shared system. We didn’t have hubs like Less Wrong and Slate Star Codex. Each of us had to bootstrap our own rationality technique out of pieces like General Semantics, philosophical pragmatism, the earliest most primitive research on cognitive biases, microeconomics, and the first stirrings of what became evolutionary psych.
Those things gave us the materials. Science fiction gave us the dream, the desire it took to sustain the effort of putting those materials together and finding rational discipline in ourselves.
Last I’m going to touch on Zen Buddhism. Eliezer likes to play with the devices of Zen rhetoric; this has been a feature of his writing since Twelve Virtues. I understood why immediately, because that attraction was obviously driven by something I myself had discovered decades before in trying to construct my own rationalist technique.
Buddhism is a huge, complex cluster of religions. One of its core aims is the rejection of illusions about how the universe is. This has led to a rediscovery, at several points in its development, of systematic theories aimed at stripping away attachments and illusions. And not just that; also meditative practices intended to shift the practitioner into a mental stance that supports less wrongness.
If you pursue this sort of thing for more than three thousand years, as Buddhists have been doing, you’re likely to find some techniques that actually do help you pay better attention to reality—even if it is difficult to dig them out of the surrounding religious encrustations afterwards.
One of the most recent periods of such rediscovery followed the 18th-century revival of Japanese Buddhism by Hakuin Ekaku. There’s a fascinating story to be told about how Euro-American culture imported Zen in the early 20th century and refined it even further in the direction Hakuin had taken it, a direction scholars of Buddhism call “ultimatism”. I’m not going to reprise that story here, just indicate one important result of it that can inform a rationalist practice.
Here’s the thing that Eliezer and I and other 20th-century rationalists noticed: Zen rhetoric and meditation program the brain for epistemic skepticism, for a rejection of language-driven attachments, for not just knowing that the map is not the territory but feeling that disjunction.
Somehow, Zen rhetoric’s ability to program brains for epistemic skepticism survives not just disconnection from Japanese culture and Buddhist religious claims, but translation out of its original language into English. This is remarkable—and, if you’re seeking tools to loosen the grip of preconceptions and biases on your thinking, very useful.
Alfred Korzybski himself noticed this almost as soon as good primary sources on Zen were available in the West, back in the 1930s; early General Semantics speaks of “silence on the objective level” in a very Zen-like way.
No, I’m not saying we all need to become students of Zen any more than I think we all need to go back and immerse ourselves in GS. But co-opting some of Zen’s language and techniques is something that Eliezer definitely did. So did I, and other rationalists before the Yudkowskian reformation tended to find their way to it as well.
If you think about all these things in combination—GS, analytic philosophy, Golden Age SF, Zen Buddhism—I think the roots of the Yudkowskian reformation become much easier to understand. Eliezer’s quest and the materials he assembled were not unique. His special gift was the same ambition as Alfred Korzybski’s: to form from what he had learned a teachable system for becoming less wrong. And, of course, the intellectual firepower to carry that through—if not perfectly, at least well enough to make a huge difference.
If nothing else, I hope this essay will leave you feeling grateful that you no longer have to do a decades-long bootstrapping process the way Eliezer and Nancy and I and others like us had to in the before times. I doubt any of us are sorry we put in the effort, but being able to shortcut a lot of it is a good thing.
Some of you, recognizing my name, will know that I ended up changing the world in my own way a few years before Eliezer began to write the Sequences. That this ensued after long struggle to develop a rationalist practice is not coincidence; if you improve your thinking hard enough over enough time I suspect it’s difficult to avoid eventually getting out in front of people who aren’t doing that.
That’s what Eliezer did, too. In the long run, I rather hope that his reform movement will turn out to have been more important than mine.
Selected sources follow. The fiction list could have been a lot longer, but I filtered pretty strongly for works that somehow addressed useful models of individual rationality training. Marked with * are those Eliezer explicitly reports he has read.
Huikai, Wumen: “The Gateless Barrier” (1228)
Peirce, Charles Sanders: “How To Make Our Ideas Clear” (1878)
Korzybski, Alfred: “Science and Sanity” (1933)
Chase, Stuart: “The Tyranny of Words” (1938)
Hayakawa, S. I.: “Language in Thought and Action” (1939) *
Russell, Bertrand: “A History of Western Philosophy” (1945)
Orwell, George: “Politics and the English Language” (1946) *
Johnson, Wendell: “People in Quandaries: The Semantics of Personal Adjustment” (1946)
Van Vogt, A. E.: “The World of Null-A” (1946) *
Heinlein, Robert Anson: “Gulf” (1949) *
Quine, Willard Van Orman: “Two Dogmas of Empiricism” (1951)
Heinlein, Robert Anson: “The Moon Is A Harsh Mistress” (1966) *
Williams, George: “Adaptation and Natural Selection” (1966) *
Pirsig, Robert M.: “Zen and the Art of Motorcycle Maintenance” (1974) *
Benares, Camden: “Zen Without Zen Masters” (1977)
Smullyan, Raymond: “The Tao is Silent” (1977) *
Hill, Gregory & Thornley, Kerry W.: “Principia Discordia (5th ed.)” (1979) *
Hofstadter, Douglas: “Gödel, Escher, Bach: An Eternal Golden Braid” (1979) *
Feynman, Richard: “Surely You’re Joking, Mr. Feynman!” (1985) *
Pearl, Judea: “Probabilistic Reasoning in Intelligent Systems” (1988) *
Stiegler, Marc: “David’s Sling” (1988) *
Zindell, David: “Neverness” (1988) *
Williams, Walter John: “Aristoi” (1992) *
Tooby & Cosmides: “The Adapted Mind: Evolutionary Psychology and the Generation of Culture” (1992) *
Wright, Robert: “The Moral Animal” (1994) *
Jaynes, E.T.: “Probability Theory: The Logic of Science” (1995) *
The assistance of Nancy Lebovitz, Eliezer Yudkowsky, Jason Azze, and Ben Pace is gratefully acknowledged. Any errors or inadvertent misrepresentations remain entirely the author’s responsibility.
This post was personally meaningful to me, and I’ll try to cover that in my review while still analyzing it in the context of LessWrong articles.
I don’t have much to add about the ‘history of rationality’ or the description of interactions of specific people.
Most of my value from this post wasn’t directly from the content, but from how the content connected to things outside of rationality and LessWrong. So, basically, I loved the citations.
LessWrong is very dense in self-links and self-citations, and to a lesser degree it still has a good number of links to other websites. However, it has a dearth of connections to things that aren’t blog posts—books, essays from before the internet, and especially older writings.
I found this post’s citation section to be a treasure trove of things I might not have found otherwise.
I have picked up and skimmed/started at least a dozen of the books on the list.
I still come back to this list sometimes when I’m looking for older books to read.
I really want more things like this on lesswrong.
I like this post for reinforcing a point that I consider important about intellectual progress, and for pushing against a failure mode of the Sequences-style rationalists.
As far as I can tell, intellectual progress is made bit by bit, with later work building on earlier work. Francis Bacon gets credit for a landmark advance in the scientific method, but it didn’t spring from nowhere; he was building on ideas that had built on ideas, and so on.
This post says the same is true for our flavor of rationality. It’s built on many things, not just probability theory.
The failure mode I think this helps with is believing that “we are the only sane people”. There is much insanity and we are saner than most, but we are descended from people who are not us, and we probably have relatives we don’t know. I think that’s worth remembering; thanks to this post for the reminder.