What is the next level of rationality?
This is part 1 of our dialogue series on the question “What is the next level of rationality?”.
Yudkowsky published Go Forth and Create the Art! in 2009. It is 2023. You and I agree that, in the last few years, there haven’t been many rationality posts on the level of Eliezer Yudkowsky (and Scott Alexander). In other words, nobody has gone forth and created the art. Isn’t that funny?
What Came Before Eliezer?
Yes, we agreed on that. I remarked that there were a few levels of rationality before Eliezer. The one directly before him was something like Sagan-Feynman style rationality (whose fans often wore the label “Skeptics”). But that’s mostly tangential to the point.
Or perhaps it’s not tangential to the point at all. Feynman was referenced by name in Harry Potter and the Methods of Rationality. I have a friend in his 20s who is reading Feynman for the first time. He’s discovering things like “you don’t need a labcoat and a PhD to test hypotheses” and “it’s okay to think for yourself”.
How do you see it connecting to the question “What’s the next level of rationality?”
Yudkowsky is a single datapoint. The more quality perspectives we have about what “rationality” is, the better we can extrapolate the fit line.
I see, so perhaps a preliminary to this discussion is the question “which level of rationality is Eliezer’s?”?
Yeah. Eliezer gets extra attention on LessWrong, but he’s not the only writer on the subject of rationality. I think we should start by asking who’s in this cluster we’re pointing at.
Alright, so in the Feynman-Sagan cluster, I’d also point to Dawkins, Michael Shermer, Sam Harris, Hitchens, and James Randi, for example. Not necessarily because I’m very familiar with their works or find them particularly valuable, but because they seem like central figures in that cluster.
Those are all reasonable names, but I’ve never actually read any of their work. My personal list includes Penn Jillette. Paul Graham and Bryan Caplan feel important too, even though they’re not branded “skeptic” or “rationality”.
I’ve read a bit, but mostly I just came late enough to the scene and found Eliezer and Scott quickly enough that I didn’t get the chance to read them deeply before then, and after I did I didn’t feel the need.
Yep, and Paul Graham is also someone Eliezer respects a lot, and I think might have even been mentioned in the sequences. I guess you could add various sci-fi authors to the list.
Personally, I feel the whole thing started with Socrates. However, by the time I got around to cracking open The Apology, I felt like I had already internalized his ideas.
But I don’t get that impression when I hang out with Rationalists. The median reader of Rationality: A-Z shatters under Socratic dialogue.
I agree, though if we’re trying to cut the history of rationality into periods/levels, then Socrates is a different (the first) period/level (though there’s a sense in which he’s been at a higher level than many who came after him).
I think Socrates’ brilliance came from realizing how little capacity to know people had at the time, and from fully developing the skill of not fooling himself. Those who came after him mostly developed the capacity to know, while paying much less attention to not fooling themselves.
I think the “Skeptics” got on this journey of thinking better and recognizing errors, but were almost completely focused on finding them in others. With Yudkowsky the focus shifted inward in a very Socratic manner, to find your own faults and limitations.
Tangent about Trolling as a core rationality skill
I’ve never heard the word “Socratic” used in that way. I like it.
Another similarity Yudkowsky has to Socrates is that they’re both notorious trolls.
That made me laugh. It’s true. I remember stories from the Sequences of Dialogues he had with people who he basically trolled.
And there’s a good reason for it. Trolling your students is absolutely necessary when teaching rationality. I troll my students/friends all the time. When I visited the Lightcone offices in Berkeley, I trolled them too.
Ah, I see that you have written about this.
Do you know why trolling is so important?
I’m not sure I understand exactly how you use the concept, so tell me why you think it’s so important.
I could explain this in simple words. But I think it would be more fun and more educational if I trolled you instead. Are you okay with that?
Haha, sure :)
You’ve convinced me. Trolling is unethical. Rationalist teachers shouldn’t do it. Let’s move on.
lol, I didn’t say anything so I couldn’t have convinced you of anything :)
Perhaps you’ve convinced yourself, but I bet you haven’t and you’re just trolling :)
(:
A rationalist must be skeptical of authority. Suppose you are a teacher of rationality, and therefore an authority figure. How do you ethically teach your students to be skeptical of you?
As an educator I do think about that a lot. On the one hand, I want to tell the students everything I know that would be useful for them; on the other hand, I want to account for the possibility that I’m wrong, so I need to develop their ability to scrutinize what I say and check whether it’s actually true.
So some teachers solve this by sacrificing either the first or the second part, because doing both of them well is harder, and that’s unfortunate.
When I was in school I had a teacher who was very good at combining these. She’d start a topic by giving us a passionate speech, which made us care and told us what she believed, but then she’d make us dig into the subject, read various reports, and come to our own conclusions. And it worked: many students did come to different conclusions.
I also think back to ‘My Favorite Liar’, where a teacher planted a falsehood in every lecture and told the students about it, so they would scrutinize his lectures to find the intentional error, but in the process also doubt and scrutinize everything else. And I guess you can call that a kind of trolling.
Good. Very good!
That is indeed a kind of trolling. After all, when a teacher is about to deceive you, she/he always lets you know in advance that you are about to encounter misinformation. That’s how you know when you need to be skeptical.
Do you understand?
I think so. I suggested specifying “intentionally deceive you” and you rejected that. And I thought: how can he let you know he’s going to deceive you if he’s not doing so intentionally? But since he might deceive you unintentionally at any time, he has to let you know in advance that you might be deceived and should be skeptical. Is that the idea?
[Note to readers: There’s a feature in the LessWrong dialogue interface where Yoav can suggest a change to what I wrote. Yoav did so. I rejected the change.]
That is the idea.
Back to “What’s the next level of rationality?”
Getting back to our original question, “What’s the next level of rationality? [after Eliezer]”, one of the (many) things he didn’t get around to writing about is how important it is for rationalists to troll each other.
Feynman was a troll too, by the way.
Absolutely, even more so than Eliezer, I think. “Surely You’re Joking, Mr. Feynman” is one of the funniest books I’ve read.
It’s hilarious.
Besides the importance of trolling, what are some other facets of rationality that Eliezer never got around to writing about?
Well, I think the best place to start is the preface Eliezer wrote in 2015 to “Rationality: A-Z”, where he lists five overarching errors he made in the Sequences:
Writing with the intention of helping people solve big, difficult, important problems, instead of helping them do better in their everyday lives.
Focusing too much on how to learn the theory and not enough on how to practice it.
Focusing too much on rational belief, too little on rational action.
Not organizing the content in the Sequences well (things are much better now with the new sequences and the LW wiki).
Speaking plainly about the stupidity of what appeared to be stupid ideas, instead of writing more courteously.
I think the first 3 are relevant to our discussion.
Some other points I’d add (some practical, some foundational/theoretical):
The sequences and most of LW thereafter focused mainly on how to be more rational as an individual, and not on how to collaborate as rationalists or be more rational as a pair or a group.
It overlooked the value of information in tradition (things like the Lindy effect, Chesterton’s fence, etc.).
Relatedly, it overlooked how many things, like certain biases, may actually be rational when analyzed more carefully or when our limitations are taken into account.
It’s based on Bayesianism, which is a bit like General Relativity in that we know it’s very much correct, but not fully, and there’s something after it that should be even more correct than it. With Bayesianism the problem is that it assumes Logical Omniscience and observing the world from outside.
Most of the foundational problems pointed out in the sequences — anthropic reasoning, reflective reasoning, strange loop circularity — haven’t been solved. And though these aren’t very relevant in day-to-day life, because they either don’t come up or we have an intuition for the answer, these sure would be nice to solve, and it would show that rationality has firm foundations, for those who care about such things.
These have all been addressed to some degree after the sequences were written, of course, so this is not new or anything, and many of these are in the “water supply” to a degree (especially the value of information in tradition).
But it shows that we have no rationalist canon that actually encompasses modern rationalist thought, which is something we would need to foster a new phase/level of rationality.
This is very helpful. You’re pointing at topics I’ve wanted to write about, but have been unsure of how to approach. For example, I want to write a post about the benefits of hypocrisy. (Most religious people are hypocrites. If you cure the hypocrisy, some may turn toward rationality, but others just end up as fundamentalist extremists.) It falls under your “overlooked value of tradition” umbrella.
But I think the most promising point might be “how to collaborate as rationalists or be more rational as a pair or a group”. This wasn’t so important when Eliezer was starting. After all, there was little community to coordinate. But I’ve been doing many Socratic dialogues, and often the first thing I have to do is teach my partner how to have a Socratic dialogue.
That connects to “helping people do better in their everyday lives” and “[f]ocusing too much on how to learn the theory and not enough on how to practice it” too.
Yes, it was one of the first things that I wanted to write about on LW (I have a draft on pair rationality from January 2020), but I didn’t feel I had a lot to say about it, and I didn’t have anyone else in my personal life who’s as interested in rationality as me (still don’t), so I didn’t have the opportunity to develop that part on my own.
It’s pretty hard to develop the art of Socratic dialogue on your own. 😛😛
I’ve got a lot to say about Socratic dialogues but, as you pointed out, my writing is often very difficult for people to interact with.
I think the root problem is that when I’m writing for an abstract audience, I’m awful at guessing what readers will and won’t understand. That’s why I like these dialogues so much. I can just ask “Do you understand?”
And it’s working, I’m experiencing none of the difficulties I tend to experience with your writing.
Then perhaps the next step of this rationality project is for you and me to do a Socratic dialogue about “how to do a Socratic dialogue”.
Alright, that sounds good. Let’s pick it up from there next time :)
(:
Meta comment about the dialogue feature:
This was the first time I used the dialogue feature and it was a blast (a much better experience than comment threads). Being able to see what the other person is writing as they write it, suggest edits, and swap things around is such a great user experience, and is so much closer to talking than any other form of written communication I’ve used thus far. I kinda wish I had the option to use this format in each of my chats (WhatsApp, Discord, etc.).
I loved how this allowed the conversation to be free-flowing, and took us on interesting tangents that we probably wouldn’t have gone on otherwise. OTOH, this might make it worse to read. I personally haven’t found any dialogue great to read yet, and it might be related to this quality, but it seems they are definitely great to have. So perhaps what’s needed is just to go the extra step and distill the dialogue afterward.
Two other points:
One thing I noticed is that we very often wrote meta notes that we later deleted, and it may be nice to have a box on the side for meta discussion, so you can keep the main thread clean.
I think it would also be nice if we could do inline reacts while editing, to be easily able to mark agreement on something (Like you would nod your head or go “aha” in the middle of a sentence to show that you agree).
I strongly agree with this. I have also not found any dialogue great to read, and that is definitely because of this exact quality.
That is definitely needed, but “just” is very much the wrong word to use here. Distilling a dialogue would end up providing most of the value to readers—much more value than the un-distilled dialogue. Unfortunately, it would also require considerable effort from the dialogue participants. It would, after all, be much like writing a regular post…
It’s possible that with the dialogue written, a well prompted LLM could distill the rest. Especially if each section that was distilled could be linked back to the section in the dialogue it was distilled from.
Sure, it’s possible. I don’t trust LLMs nearly enough to depend directly on such a thing in a systematic way, but perhaps there could be a workflow where the LLM-generated summary is then fed back to the dialogue participants to sign off on. That might be a very useful thing for either the LW team or some third party to build, if it worked.
Remember to link to this feedback on Intercom, to increase the chance that the LW team sees it.
Thanks for the reminder, I will :)
I think there have been some attempts to describe a further level of rationality. They just haven’t taken off.
http://bewelltuned.com/ has been the most useful to me. Per bit, I’d say I prefer it to the Sequences. Though it is incomplete. Sadly, the author committed suicide after doing some crazy things to themselves. Raemon, who knows more of the details of their suicide than I do, says their suicide wasn’t really related to the content of BeWellTuned (see the comments on this post).
I’ve been impressed by what little of LoganStrohl’s work on naturalism I’ve read. It also seems like it’d mesh nicely with some BWT techniques that I’ve been practicing.
And Cedric Chin has made great strides in improving his own instrumental rationality, especially in regards to business expertise. I found his notes on the literature on expertise to be very useful for some research I did, and because it refined my understanding of how people get good at things.
ctrl-f korz
hmmm
To explain: Alfred Korzybski, the guy behind General Semantics, is basically “rationality from 100 years ago”. (He lived 1879-1950.) He’s ~2 generations before Feynman (1918-1988), who was ~one before Sagan (1934-1996), then there’s a 2-3 generation gap to Yudkowsky (1979-). (Of course if you add more names to the list, the gaps disappear; reordering your list, you get James Randi (1928-2020), Dawkins (1941-), Hitchens (1949-2011), Michael Shermer (1954-), and Sam Harris (1967-), which takes you from Feynman to Yudkowsky, basically.)
He features in Rationalism before the Sequences, and is interesting both because 1) you can directly read his stuff, like Science and Sanity, and 2) most of his stuff has already made it to you indirectly, from the student’s students. (Yudkowsky apparently wrote the Sequences before reading any Korzybski directly, but read lots of stuff written by people who read Korzybski.)
There are, of course, figures before Korzybski, but I think the gaps get larger / it becomes less obviously “rationalism” instead of something closer to “science”.
Ah, of course!
Yeah, if we went for a full history of rationality we definitely would have mentioned him. We haven’t because I don’t think he had much of an influence over the “Skeptics” brand of rationality, which we talked about as the popular form of rationality before Eliezer. I think one of the things that distinguished Eliezer’s form of rationality was that he integrated Korzybski’s ideas into it.
So nobody’s interested in backtracking and fixing problems with the old stuff?
I don’t think, like, re-editing AI to Zombies once again is valuable.
I do think, like, “come up with your own n virtues of rationality” is a good exercise. I think destruction & resynthesis could be more fruitful.
The problem here, I think, is that there is no new level of rationality in the sense of a qualitative change. Eliezer wrote down his knowledge, some unanswered questions, and his tentative answers, and went forth and created some of the Art, like functional decision theory. The rest is just a continuation of this work.
I’m currently writing a series of posts on anthropic reasoning with the ultimate goal of solving it once and for all.
How do you imagine a satisfying solution? What are the problems you would like to be addressed and questions to be answered?
Likewise, what are the issues with reflective reasoning and strange loop circularity?
I saw your series and I’m happy you’re working on it. Unfortunately I’m not well versed enough in the subject (or probability in general) to say what a satisfying solution would look like or what exactly are the problems and questions I would like to be addressed and answered. For the same reason I’m also not really able to evaluate your work. I wish it got more attention from people who are more well versed in it.
Reflective reasoning is something Eliezer and others wrote a lot about. “Strange loop circularity” is my name for something Eliezer gestured at a few times, which he called “strange loops through the meta level”. In Where Recursive Justification Hits Bottom he justifies using induction to justify induction and Occam’s razor to justify Occam’s razor, and says that it seems to him like it should be possible to formalize something that allows you to make valid “circular” reasoning like this, but still prevents invalid circular reasoning. I share his intuition, but don’t have the capability to solve the problem. But if it is solved, then it solves the Münchhausen trilemma, which is quite an annoying thorn.
Oh, so it was what I was thinking. Yeah, I’ve just been explaining how it all makes sense to a person on Astral Codex. I think Eliezer mostly solved the Münchhausen trilemma in that very essay, or at least provided crucial insight for it. But an accurate and detailed explanation definitely wouldn’t harm. As soon as I’m finished with anthropics, I’ll try to provide it.
I think saying he “mostly solved” it goes too far, even he says so. But I definitely agree he provided crucial insight for it. I think I also added a bit in this comment.
Awesome. I hope people pay attention.
Btw here are the posts I can find where he talks about this:
Where Recursive Justification Hits Bottom
My Kind of Reflection
Fundamental Doubts
You Provably Can’t Trust Yourself
And here he mentions it but doesn’t talk primarily about it:
Setting Up Metaethics
“Arbitrary”
Mirrors and Paintings
Is Fairness Arbitrary?
The Meaning of Right
I’m really interested in a new Sequences. I don’t think it would even be that hard to do, it’s just not a thing that most rationalists find interesting in contrast to whatever else they’re doing.
I think that the next level after Eliezer would be the additions added by Scott Alexander. His most well-known posts are pretty much canon at this point.
This addresses (1) with the reviews of Seeing Like a State and The Secret of Our Success.
Well, yes and no. The Secret of Our Success was indeed one of the things I thought about when I wrote that some of this has been addressed. But a handful of blog posts on this one problem don’t constitute a new level (a paradigm, if you wish). Most of his other posts that became canon don’t really go beyond Eliezer’s paradigm; they just expand it incredibly well.
We will know we’ve fully entered the new level/paradigm when we have a new canon that answers all of these questions (and probably a few more) to some degree of completeness (having a canon also points to the need to have a certain level of consensus and common knowledge). The new level of rationality will be as distinct from Eliezer’s level as Eliezer’s level was distinct from the Feynman-Sagan level.
I think the informational value of tradition, and the progress-conservation tension, is indeed where we came farthest, and we mostly just need to collect everything that was written and distill it so it can become part of a future canon. After that, I think we came farthest on having an improved understanding of biases, but there’s still some distance to go.
Other than that, I think we’re quite far from a satisfying answer to the other problems, and so we’re quite far from fully entering the next level.
What do you think of David Chapman’s stuff? I’m thinking of his curriculum sketch in particular.
I don’t think most rationalists were very excited by it though, e.g. Scott’s brief look at it in 2013 (and David’s response downthread) and an old comment thread I can no longer find between David and Kaj Sotala.
I don’t plan to read David Chapman’s writings. His website is titled “Meta-rationality”. When I’m teaching rationality, one of the first things I have to do is tell students, repeatedly, to stop being meta.
Empiricism is about reality. “Meta” is at least one step away from reality, and therefore at least one step farther from empiricism.
Telling people to stop being meta is very important, but I think you may be misunderstanding the way in which Chapman is using the term. AFAICT it’s really more about being able to step back from your own viewpoint and assumptions and effectively apply a mental toolbox and different mental stances effectively to a problem that isn’t trivial or already-solved. Personally I’ve found it has helped keep me from going too meta in a lot of cases, by re-orienting my thinking to what’s needed.
Chapman’s old work programming Pengi with Phil Agre at the MIT AI Lab seems to suggest otherwise, but I respect your decision to not read his writings, since they mirror mine after attempting to and failing to grok him.