This might be an unpopular opinion, but I really dislike the term ‘rationalist’ for four reasons:
1) It makes you sound self-aggrandizing. The term gives the impression that you think you are already rational and therefore smarter-than-thou.
2) It’s already a term used to describe a different group. In fact, that way of using the term is not only way older, but also way more common.
3) The people it describes are not only interested in ‘rationality’, but also talk at length about things like AI, Bayes’ theorem, utilitarianism, etc.
4) We don’t even agree on what it means anymore. I’m not sure we ever did, but post-rationality has put the nail in the coffin.
I’ve toyed with introducing the term ‘Aspirationalist’, but maybe we should just split it into ‘Lesswrongers’, ‘SSCers’, ‘Effective Altruists’, etc.?
“Rationalism” has the baggage of having meant the idea of finding truth by pure reason, without needing to look at the world.
“Empiricism” has the baggage of having meant the idea of finding truth just by looking at the world, without applying reason to discern its inner structures.
“Bayesianism” is far too narrow.
“Baconianism” might be close enough, but too obscure.
There does not appear to be any word that means “finding the truth by reason and observation, not separate from each other, but as different aspects of a single method, as described in the Sequences” — even though many of the individual ideas there can be found in sources predating them.
Upvoted for introducing me to the term Baconianism, even though it is a little bit off. We could do what every academic and their dog does when they find something they almost agree with and slap a ‘neo-’ in front of it, creating e.g. neobaconianism. But if we are gonna invent new terms anyway, we might as well go with aspirationalist.
I have taken to calling myself a dilettante after someone called me that at an auspicious gathering of thought leaders. I don’t actually know what it means but it sounds very French (that means sophisticated for those of you who don’t know).
I speak French. It means ‘amateur’.
Ah, a person who engages in a profession while unpaid. Why yes, I am also a philanthropist. I applaud my original complimenteur for noticing.
I really really like that you didn’t take that as an insult. I should really start worrying less about offending people on this website. I rewrote that answer like five times wondering if I should make it sound less harsh, but I didn’t and you remained upbeat. The faux-posh writing style also made me grin from ear to ear. I verily say unto thee, Taketh my upvote good sire.
I didn’t notice the faux-posh style until you pointed it out. Thank you for bringing that to my attention.
In English it means a particular kind of amateur: one without commitment, a dabbler, whose knowledge is merely superficial. “Amateur” is also used in the same sense, although it has not entirely lost the meaning of one who engages in something for the love of it, and may be (and occasionally is) the equal of a professional.
Who’s “we”?
I don’t use the term, and don’t generally refer to the groups it might describe. When I need to describe something, I try to consider the specifics of what I’m trying to convey, and to whom—almost always there are better terms for whatever is under discussion. “philosophical techno-nerds” is my current go-to, but “LessWrong participants” is more precise.
[note: yes, that first question is intended to both be a dig at some assumptions AND as a pithy restatement of the primary question of community.]
The difference between the people I would call rationalists and “philosophical techno-nerds” in general is that for rationalists, their rationality actually affects the way they act. Rationalists do a lot more sport than your average “philosophical techno-nerd”.
Heh. We may know different slices of those who identify as Rationalists and those we’re calling Philosophical techno-nerds. I agree that there’s probably only about 80% overlap, but I’d say the variance in rigor and in behavior is pretty close to the same in the two groups, just with a wider variety of topics that can be too-narrowly analyzed in the PTNs.
I don’t use it either (obviously, why would I use a term I actively dislike), so this question is actually aimed at the people who do. But saying ‘You should stop calling yourself a rationalist’ is coming on way too strong and I generally try to be nice. Since you already don’t use it you’re not really the target audience, but thanks for commenting anyway because if the term is actually only used by a minority, having that pointed out to them might help retire the term.
I generally use “rationalist” as a short-hand catchall among people who will already know what i’m talking about, i.e. with my girlfriend or with ppl in ratsphere tumblr. i would never introduce myself to someone outside of the community that way, so maybe i’m also not the target audience for your question.
however, i feel like the minority of people who would self-identify as a “rationalist” to someone decidedly outgroup (hasn’t heard of LessWrong/EY, isn’t interested in EA, consequentialism, etc) is a different problem where the term itself isn’t inherently the problem. people would probably be equally weirded out if you described yourself as a “utilitarian” or “effective altruist” just bc describing ourselves by our philosophies is not super common in the world-at-large.
i do really like the term aspirationalist tho. is it pronounced like aspir-rationalist or aspiration-alist ?
While I do think it’s fine to use the term reactively with people who initiate it, I would still advise against using it proactively, because sometimes a person you think is ingroup is actually surprisingly outgroup. My local EA group has a surprising lack of LWers, and I would rather we slowly phase out or replace its usage, instead of continuing to use it and increasing the risk of causing unnecessary confusion amongst the pseudo-ingroup.
EDIT: Also, saying you’re a utilitarian is indeed weird, but it is not actively causing confusion like ‘rationalist’ does, so I would honestly prefer that.
Since this is me throwing a compromise to the folk that do use rationalist, I would suggest pronouncing it like aspir-rationalist.
“LessWrongers” doesn’t sound fancy and Latinate enough to be an intellectual movement. We need something like “error-reductionists”.
Error-reductionism: the idea that error is inevitable, but we’re trying to reduce how much. Probabilism, perpetual beta, and ambition/audacity/grit.
Error-reductionism: the idea that errors are reducible, i.e., explainable in terms of causes and parts such as cognitive biases and bad micro-habits.
Error-reductionism: philosophical reductionism (the world is physical, decomposable, lawful, understandable, not inherently mysterious or magical) combined with an error theory about non-reductionist ideas. We have Bayesianism as a principled (reductive) account of science; we don’t need to call Thor mean names like “meaningless” or say he’s in a separate magisterium from science. We’re allowed to say those ideas were just wrong. We learn about the world by looking at the world and seeing what stuff happens and what methods work — not by applying a priori definitions of “what hypotheses sound sciencey to me”.
Ooh, I like that, although it is a bit long and it contains a hyphen. The vocalization is also going to be a bit awkward (too many r’s in a row). We could shorten it to ‘erreductionist’, since ‘to err’ means the same thing, but you do lose some clarity.
Something like “Lesswrongers” would be okay for me; at least it is obvious for insiders what it refers to. (For outsiders, there will always be the inferential distance, no matter what label you choose.)
“Effective Altruists” is a specific group I am not a member of, although I sympathize with them. In my opinion, only people who donate (now, not in their imagined future) 10% of their income to EA charities should call themselves this.
“SSCers” on the other hand is too wide to be a meaningful identity, at least for me. It is definitely not a good replacement for what we currently call “rationalists”.
About the objections 3 and 4 -- let’s look at how we historically got where we are. There was a blog by Eliezer Yudkowsky about some ideas that he considered important to make a blog about. It felt like those ideas made a unified whole. Gradually the blog attracted people who were more interested in some subset and less interested in the rest, or in ideas that were related to some subset, etc., and thus the unity was gradually lost. We can still point towards the general values: having true knowledge about the nature of the world and ourselves, improving the world, improving ourselves by overcoming our cognitive shortcomings and generally becoming stronger, individually and also cooperating with each other. There are also people who like to hang out with the crowd despite not sharing all of these values.
EA is more than just giving- people who work careers based on EA principles have every right to call themselves EAs, even if they never donate a single penny
I don’t want to judge individual people, but it is my opinion that many people call themselves EAs although they shouldn’t. This could become a problem in the future, if it becomes common knowledge that most “bailey EAs” are actually not “motte EAs”.
If someone is developing a malaria vaccine, it sounds reasonable to consider them an EA even if they don’t donate a penny, because their research can save millions of lives. If someone makes millions without donating, in order to reinvest and make billions, in which case they will donate the billions, it also makes sense to call them an EA (or perhaps “future EA”).
But it is known that people’s values often change as they age. For example, people who in their 20s believe they would sacrifice everything for Jesus (and sign abstinence pledges and whatever) can become atheists in their 30s. In the same way, it is completely plausible that people in their 20s sincerely believe they would totally donate 10% of their future income to EA causes (and sign pledges)… and change their opinion in their 30s when they start having an actual income. I am not saying this will happen to all student EAs, but I am saying it will happen to some. (I would expect the fraction to grow if EA becomes more popular in the mainstream, because this feels like something most normies would do without hesitation.)
Thus I am in favor of having a norm “you have to do something (more than merely self-identifying as an EA) to be actually called an EA”. If it depended on me, the norm would be like “actually gives 10% of income, and the income is at least the local minimum wage”. But I am not an EA, so I am just commenting on this as an outsider.
+1 for “Lesswrongers” or “the LessWrong community”
A name for an emergent community is going to have to also be, well, emergent. But you can nudge that emergence in the direction you choose. I think LessWronger is the next natural candidate. I was introduced to a group once as a “LessWronger” even though today is my first time posting or upvoting anything here despite being an avid SSCer for 3 years. I’ve always been aware of LW, and the label would have been OK for me.
Ok, that second suggestion was not “let’s call ourselves one of these three things (LW or SSC or EA)”; I suggested we drop ‘rationalist’ in general and split our community into (these and other) subcommunities. And I’m not sure I agree with you on some terminology either. I would call myself an Effective Altruist even though I don’t donate 10% (I’m studying ethics to work for EA later), because I’m on the Giving What We Can pledge and I’m active in my local EA community.
And EY’s blog was never as coherent as people say it was. But let’s be extremely charitable and cut away all his other interests in AI, economics, etc., and only talk about: 1) having accurate beliefs and 2) making good decisions. For one, this is so vague it’s almost meaningless, and secondly, even that is not coherent because those two things are in conflict. The first is the philosophy of realism and the second is pragmatism, two irreconcilable philosophies. I’ve always dropped realism in favor of pragmatism, and apparently that makes me a post-rationalist now? Do people realize that you can’t always do both?
Commented on EA under sibling comment. Sorry, it wasn’t meant as a personal attack, although it probably seems so. Sorry again.
From my perspective, the narrative behind the Sequences was like this: “The superhuman artificial intelligence could easily kill us all, for reasons that have nothing to do with Terminator movies, but instead are like Goodhart’s law on steroids. It would require extraordinary work to create an intelligence that has human-compatible values and doesn’t screw up things on accident. Such work would require smart people who have unconfused thinking about human values and intelligence. Unfortunately, even highly intelligent people get easily confused about important things. Here is why people are naturally so confused, and here is how to look at those important things properly. (Here is some fictional evidence about doing rationality better.)”
To me it seems that pragmatism without accurate beliefs is a bit like running across a minefield. You are so fast that you leave all the losers behind. Then something unexpected happens and you die. (Metaphorically speaking, unless you are Steve Jobs.) A certain fraction of people survives the minefield, and then books and movies are made celebrating their strategy; failing to mention the people who used the same strategy and died. To me it seems like an open question whether such strategy is actually better on average. (Though maybe this is just my ignorance speaking.)
In real life, many people who try to have accurate beliefs are failing, often for predictable reasons. So, maybe this whole project is indeed as doomed as you see it. But maybe there are other factors. For example, both “trying to have accurate beliefs” and “failing at life” could be statistical consequences of being on the autistic spectrum. In that case, if you already happen to be on the spectrum, you cannot get rid of the bad consequences by abandoning the desire to have accurate beliefs. Another possible angle is that “trying to have accurate beliefs” is most fruitful when you associate with people who have the same values. Most of human knowledge is a result of collaboration. In such case, creating a community of people who share these values is the right move.
I don’t want to go too deep into “the true X has never been tried yet” territory, but to me LW-style rationality seems like a rather new project, which could possibly bring new fruit. (The predecessors in the same reference class are, I suppose, General Semantics and Randian Objectivism.) So maybe there is a way to success that doesn’t involve self-deception. At least for myself, I don’t see a better option. But this may be about my personality, so I don’t want to generalize to other people. Actually, it seems like for most people, LW-style rationality is not an option.
I suppose my point is that Less Wrong philosophy—the attempt to reconcile search for truth with winning at life—is a meaningful project; although maybe only for some kinds of people (not meant as a value judgment, but: different personality types exist and different strategies work for them).
It didn’t, because you couldn’t even if you wanted to. You don’t know me personally, so why would I assume you were attacking me personally? I was merely trying to state a terminological disagreement in an attempt to change the reader’s hidden inference.
This is not what philosophical pragmatism is about. With pragmatism you learn what is useful, which in 99.999% of cases will be the thing that’s accurate. Note that I said:
But philosophy is all about the edge cases. What do you do when there is knowledge that is dangerous for humanity’s survival? Do you learn things that are probably memetic hazards? Realism says ‘yes’, Pragmatism says ‘no’. Pragmatism is about ‘winning’, realism is about ‘truth’. If somehow you can show that these clearly opposed philosophies are actually reconcilable you will win all the philosophy awards. Until that time, I choose winning.
OK, thanks for the explanation. The part about avoiding memetic hazards… seems like a valuable thing to do, but it also seems to me that in practice most attempts to avoid memetic hazards have second-order effects. (Obvious counter-argument: if there are successful cases of avoiding memetic hazards that do not have side effects, I would probably not know about them. An important part of keeping a secret is never mentioning that there is a secret.)
But this would be a debate for another day. Maybe an entire field of research: how to communicate infohazards. (If you found it, there is a chance other people will, too. How can you decrease that probability, without doing things that will likely blow back later.)
In the meanwhile, if in most cases the accurate thing is the useful thing, and if we don’t know how to handle the remaining cases, I feel okay going for the accurate thing. (This is probably easier for me, because I personally don’t do anything important on a large scale, so I don’t have to worry about accidentally destroying humanity.)
Salutations! I arrived here by Googling. I got stuck on what to tag my notes with, since “rationalism” is taken for group 2. I am grabbing “aspirationalism” as the tag for my Zettelkasten. Two thumbs up for “aspirationalism”!
To address 2) specifically, I would say that philosophical “Rationalists” are a wider group, but they would generally include the kind of philosophical views that most people on e.g. LW hold, or at least they include a pathway to reaching those views.
See the philosophers listed in the Wikipedia article, for example:
Pythagoras—foundation for mathematical inquiry into the world and for the creation of mathematical formalisms in general
Plato—foundation for “modern” reasoning and logic in general, with a lot of ***s
Aristotle -- (outdated) foundation for observing the world and creating theories and taxonomies. The fact that he’s mostly “wrong” about everything and the “wrongness” is obvious also gets you half of the way to understanding Kuhn
René Descartes—“questioning” more fundamental assumptions that e.g. Socrates would have had problems seeing as assumptions. Also foundational for modern mathematics.
Baruch Spinoza—I don’t feel like I can summarize why reading Spinoza leads one to the LW brand of rationalism. I think it boils down to his obsession with internal consistency and his willingness to burn any bridge for the sake of reaching a “correct” conclusion.
Gottfried Leibniz—I mean, personally, I hate this guy. But it seems to me that the interpretations of physics that I’ve seen around here, and also those that important people in the community (e.g. Eliezer and Scott) use, are heavily influenced by his work. He was also arguably one of the earliest people to build computers and think about them, so there’s that.
Immanuel Kant—Arguably introduced the game-theoretical view to the world. Also helped correct/disprove a lot of biased reasoning in philosophy that leads to e.g. arguments for the existence of God based on linguistic quirks.
I think, at least in regards to philosophy up to Kant, if one were to read philosophy following this exact chain of philosophers, they would have a very strong base from which to approach/develop rationalist thought as seemingly espoused by LW.
So in that sense, the term “Rationalist” seems well-fitting if one wants to describe “the general philosophical direction” most people here are coming from.
Looking at the listed philosophers is not the best way to understand what’s going on here. The category of rationalists is not “philosophers like those guys,” it is one of a pair of opposed categories (the other being the empiricists) into which various philosophers fit to varying degrees. It is less appropriate for the ancients than for Descartes, Spinoza, and Leibniz (those three are really the paradigm rationalists). And the wikipedia article is taking a controversial position in putting Kant in the rationalist category. Kant was aware of the categories (indeed, is a major source of the tradition of grouping philosophers into those two categories), and did not consider himself to belong to either of them (his preferred terms for the categories were “dogmatists” for the rationalists and “skeptics” for the empiricists, which is probably enough on its own to give you a sense for how he viewed the two groups). There is admittedly a popular line of Kant interpretation which reads him as a kind of crypto-rationalist, but there are also those of us who read him as a crypto-empiricist, and not a few who take him at his word as being outside both categories.
In any event, the empiricist tradition has at least as much, if not more, influence on the LW crowd as the rationalist tradition, and really both categories work best for the early moderns and aren’t fantastic for categorizing most philosophers in the present era. So anybody familiar with the philosophical term is likely to find its application to this community initially confusing.
Great comment. I would just like to add that Kant killed/unified Empiricism and Rationalism, and after Kant the terms quickly started to fizzle out.
Seems like Kant killed it by naming it.
The Tao that can be named is not the eternal Tao;
because the later philosophers will call themselves “post-Taoists”.
Cool quote, but in this case probably not accurate. From wikipedia:
From etymonline:
EDIT: There is some debate as to when “modern” use of the term empiricism started, the first use was at least much much older. Stanford.edu writes:
EDIT 2: [emphasis added]
Quinean naturalism used to be the description for which corner of philosophy LW tends to inhabit. Also Wittgensteinian in its understanding of language (A Human’s Guide to Words, etc.).