Some friends and I were discussing “what makes one a rationalist”, which is a less strict criterion than “what makes one a member of the rationalist community”, which is a less strict criterion still than “what makes one a high-status, well-liked, or enthusiastic member of the rationalist community”. I am most interested in the first question.
It seems obvious to me that 99% of people you meet on the street are very clearly (and mostly joyously) not rationalists. But what is it they do or don’t do that makes them so obviously not members?
It also seems to be a trope that people will say they are not rationalists, yet will read all the rationalist literature and agree with all the thought processes and intellectual rules of engagement. This seems to be very common. As I was listening to an episode of The Bayesian Conspiracy (I can’t recall which one) and hearing a guest quibble that they are just “rationalist adjacent” but not a real rationalist, I started to realize that my own reasons at the time for why I wasn’t one were similar, and further realized how absurd and in-denial they sounded. Some people call that group “post-rats”, but to me that seems not like an exit from the group, nor even a schism, but just a church of the same theology adopting different social norms.
From what I was told, LessOnline was held at the same time as a major EA event, and some even joked that LessOnline was allowed to have meat catered because the die-hard EA rationalists had gone to that event instead. I met a guy at a co-working space in the Presidio during a short day trip out during the week of LessOnline, and he mentioned that he liked that “rationalists are getting back to a more pure version of themselves” and ditching the requirement to be an EA.
All of this made me think that rationalism is basically about deriving “is” statements, while its moral offshoots are about “ought” statements, and that one can be a complete rationalist without being an EA or a utilitarian.
So I wanted to extend my conversation with my friend to the community.
Here are the things Claude thought might make one a rationalist:
(I have bolded the only ones that I think are strict requirements to be a rationalist)
**Curiosity about discovering truth**
Willingness to change mind when presented with evidence
Interest in improving reasoning and decision-making skills
Familiarity with key rationalist concepts (e.g. Bayesian reasoning, cognitive biases)
Engagement with rationalist-adjacent communities (LessWrong, EA, etc.)
Belief in the importance of AI alignment research
Acceptance of scientific consensus on major issues
Openness to unconventional ideas if well-argued
Interest in futurism and emerging technologies
Commitment to intellectual honesty
Skepticism towards traditional religious beliefs
Utilitarian-leaning ethical framework
Belief in moral uncertainty
Interest in optimizing charitable giving
Familiarity with rationalist fiction (e.g. HPMOR)
Openness to cryonics and life extension
Belief in the many-worlds interpretation of quantum mechanics
Polyamory or openness to non-traditional relationship structures
Interest in nootropics and human enhancement
Belief in the importance of existential risk reduction
It seems that my view is that curiosity is basically the only criterion that distinguishes a rationalist from a non-rationalist. Jurgen Schmidhuber has written a lot dissecting what makes a mind curious; that might help us break down the term.
Artificial Curiosity & Creativity Since 1990-91
Given that his theories always end up going back to something about compression, and given that my personal mission is omniscience, I wonder if my view is that rationalists are basically just people who are also ultimately seeking omniscience (maximum possible compression of all data).
Data Analysis
I decided to do some data analysis on some survey data from Aella’s blog.
I will be updating this as I get more data. Here is the difference in the distribution of answers between people who answered in the highest affirmative to “I like the forum ‘LessWrong’” and the rest of the respondents.
Notice any interesting divergences? Below is how LessWrong fanatics differ most from Aella’s other respondents.
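For anyone who wants to replicate this kind of split on their own survey data, here is a minimal sketch of the basic approach. The column names and Likert coding are hypothetical placeholders, not Aella’s actual survey schema: it splits respondents on the highest-affirmative answer to the LessWrong question, then ranks the remaining questions by how far the two groups’ mean answers diverge.

```python
import pandas as pd

# Hypothetical survey data: one row per respondent, one column per question,
# answers coded 1-5 on a Likert scale. Column names are illustrative only.
df = pd.DataFrame({
    "likes_lesswrong": [5, 2, 5, 1, 3, 5, 4, 2],
    "q_polyamory":     [4, 2, 5, 1, 2, 4, 3, 1],
    "q_cryonics":      [5, 1, 4, 2, 2, 5, 2, 1],
})

# Split respondents: highest affirmative on "I like the forum 'LessWrong'"
# versus everyone else.
fans = df[df["likes_lesswrong"] == 5]
rest = df[df["likes_lesswrong"] < 5]

# Rank the other questions by the absolute gap in mean answer between groups.
questions = [c for c in df.columns if c != "likes_lesswrong"]
divergence = (fans[questions].mean() - rest[questions].mean()).abs()
divergence = divergence.sort_values(ascending=False)
print(divergence)
```

A difference in means is the crudest possible divergence measure; with real data one would likely compare full answer distributions (e.g. with a KS test or KL divergence) rather than just their means.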
Anyone excited to see where they fall on the manifold of rationalist adjacents?
Having discovered some interesting sections of the manifold, I think I will later make a higher-quality visualization, along with a justification for the dividing lines.
I’ll give a somewhat aspirational answer:
A (local-definition) rationalist is someone who takes an interest in how the mind works and how it errs, and then takes seriously that this applies to themselves.
Oxford languages (or really just after googling) says “rational” is “based on or in accordance with reason or logic.”
I think there are a lot of other types of definitions (I believe LessWrong mentions it is related to the process of finding truth). For me, it is useful first to break this down into two parts: 1) observation and information analysis, and 2) decision making.
For 1): Truth, but particularly causality finding. (This is very close to the first one you bolded, and I somehow feel many of the other ones are just derived from it. I added causality because many true observations are not really about causality.)
For 2): My controversial opinion is that everyone is probably/usually a “rationalist”; it is just that sometimes the reasoning is conscious, and other times it is sub- or unconscious. These reasonings/preferences are unique to each person. It would be dangerous, in my opinion, if someone tried to practice “rationality” based on external reasonings/preferences, or on reasonings/preferences recognized only by the person’s conscious mind (even if a preference is short-term). I think a useful practice is to: 1. notice what one intuitively wants to do vs. what one thinks one should do (or the multiple options one is considering), 2. ask why there is a discrepancy, 3. at least surface the unconscious reasoning, and 4. weigh out the potential reasonings that lead to conflicting results (for example, short-term preferences vs. long-term goals).