I’ve never really understood the “rationalists don’t win” sentiment. The people I’ve met who have LW accounts have all seemed much more competent, fun, and agenty than all of my “normal” friends (most of whom are STEM students at a private university).
“I should note that rationalists aren’t really making new and innovative discoveries (the non-superstar ones anyway)”

There have been plenty of Gwern-style research posts on LW, especially given that writing research posts of that nature is quite time-consuming.
I went to an LW meetup once or twice. With one exception the people there seemed less competent and fun than my university friends, work colleagues, or extended family, though possibly more competent than my non-university friends.
That was also true for me until I moved to the bay. I suspect it simply doesn’t move the needle much, and it’s just a question of who it attracts.
I have the opposite experience! Most people at LW meetups I’ve been to have tended to be successful programmers or people who have, or are working on, things like math PhDs. They’re generally more socially awkward, but that’s not a great proxy for “competence” in this kind of crowd.
Do you think this was caused by their rationality? It seems more likely to me that these people are drawn to rationality because it validates how they already think.
What you just said doesn’t make sense. “Rationality”, as formally defined by this community, refers to “doing well” (a definition I contest, but never mind). Therefore, the question is not “was it caused by their rationality?”, but “was it caused by a lack of rationality?”, or perhaps “was their lack of rationality caused by using LW techniques?”.
Defining rationality as winning is useless in most discussions. Obviously what I was referring to is rationality as defined by the community, e.g. “extreme epistemic rationality”.
The community defines rationality as epistemic rationality AND as winning, and not noticing the difference between the two leads to the idea that rationalists ought to win at everything... that the winningness of instrumental rationality and the universality of epistemic rationality can be combined.
There are plenty of researchers who have never heard of LessWrong, but who manage to produce good work in fields LessWrong respects. So have they...
(A) overcome their lack of rationality,
...or...
(B) learnt rationality somewhere else?
And, if the answer is (B), what is LW adding to rationality? The assumption that rationality will make you good at a range of things that aren’t academic or technical?
The answer is, of course, (B), but what LW adds is a common vocabulary and a massive compilation of material in one place. Most people who learn how to think from disparate sources have a hard time codifying what they understand or teaching it to others. Vocabulary and discourse help immensely with that.
So, for instance, I can neatly tell people, “Reversed stupidity is not intelligence!”, and thus save myself an incredible amount of explanation about how real-life problems are searches through huge spaces for tiny sub-spaces encoding solutions, so “reversing” some particularly bad solution hasn’t done any substantial work towards locating the sub-space I actually wanted.
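To make the search-space point concrete, here is a minimal toy sketch in Python (the numbers, names, and the bit-flip “reversal” are hypothetical illustrations, not anything from the comment above): knowing that one candidate is bad, and taking its reverse, contributes essentially nothing towards locating the tiny sub-space of good solutions.

```python
import random

# Toy numbers, purely illustrative: a "solution" is one of a handful of
# good bitstrings hidden in a space of 2**40 candidates.
N_BITS = 40
GOOD = {random.getrandbits(N_BITS) for _ in range(10)}  # tiny target sub-space

def is_solution(candidate: int) -> bool:
    return candidate in GOOD

# A known-bad candidate ("stupidity") and its bitwise reversal.
bad = random.getrandbits(N_BITS)
reversed_bad = bad ^ ((1 << N_BITS) - 1)  # flip every bit

# Reversing the bad candidate is just one more blind guess: its chance of
# landing in the good sub-space is about 10 / 2**40, i.e. essentially zero.
print(is_solution(reversed_bad))   # almost certainly False
print(10 / 2 ** N_BITS)            # prior probability of any single guess
```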
It only creates a common vocabulary amongst a subculture. LW vocabulary relabels a lot of traditional rationality terms.
Has anyone put together a translation dictionary? Because it seems to me that most of the terms are the same, and yet it is common to claim that relabeling is common without any sort of quantitative comparison.
Huh, lemme do it.
Schelling fence → bright-line rule
Semantic stopsign → thought-terminating cliché
Anti-inductiveness → reverse Tinkerbell effect
“0 and 1 are not probabilities” → Cromwell’s rule
Tapping out → agreeing to disagree (which sometimes confuses LWers when they take the latter literally (see last paragraph of linked comment))
ETA (edited to add) → PS (post scriptum)
That’s off the top of my head, but I think I’ve seen more.
Thanks for this. Let me know if you have any others and I will add them to this wiki page I created: Less Wrong Canon on Rationality. Here are some more that I already had.
Fallacy of gray → Continuum fallacy
Motivated skepticism → disconfirmation bias
Marginally zero-sum game → arms race
Weirdness points → idiosyncrasy credits
Funging Against → Considering the alternative
Akrasia → Procrastination/Resistance
Belief in Belief → Self-Deception
Ugh Field → Aversion to (I had a better fit for this but I can’t think of it now)
Thanks for the list!
I am amused by this section of Anti-Inductiveness in this context, though:

Not that this is standard terminology—but perhaps “efficient market” doesn’t convey quite the same warning as “anti-inductive”. We would appear to need stronger warnings.
Instrumental/terminal = hypothetical/categorical
rationalist taboo = unpacking.
Instrumental and terminal are pretty common terms. I’ve seen them in philosophy and business classes.
It has been debated many times on LW whether LW needlessly invents new words for already existing terms, or whether the new words label things that are not considered elsewhere.
I don’t remember the outcomes of those debates. It seems to me they usually went like this:
“LW invents new words for many things that already have standard names.”
“Can you give me five examples?”
“What LW calls X is called Y everywhere else.” (provides only one example)
“Actually X is not the same concept as Y.”
“Yes it is.”
“It is not.”
...
So I guess at the end both sides believe they have won the debate.
I just ran into this one because it came up in a Reddit thread: in this post Eliezer uses the term “catgirl” to mean a non-sentient sexbot. While that isn’t a traditional rationality term, I think it fits the spirit of the question (and predictably, many people responded to the Reddit thread using the normal meaning of “catgirl” rather than Eliezer’s).
Previously.
RationalWiki discusses a few:

Another problem of LessWrong is that its isolationism represents a self-made problem (unlike demographics). Despite intense philosophical speculation, the users tend towards a proud contempt of mainstream and ancient philosophy[39] and this then leads to them having to re-invent the wheel. When this tendency is coupled with the metaphors and parables that are central to LessWrong’s attraction, it explains why they invent new terms for already existing concepts.[40] The compatibilism position on free will/determinism is called “requiredism”[41] on LessWrong, for example, and the continuum fallacy is relabeled “the fallacy of gray.” The end result is a Seinfeldesque series of superfluous neologisms.
In my view, RationalWiki cherry picks certain LessWrongers to bolster their case. You can’t really conclude that these people represent LessWrong as a whole. You can find plenty of discussion of the terminology issue here, for example, and the way RationalWiki presents things makes it sound like LessWrongers are ignorant. I find this sort of misrepresentation to be common at RationalWiki, unfortunately.
Their approach reduces to an anti-epistemic affect-heuristic, using the ugh-field they self-generate in a reverse affective death spiral (loosely based on our memeplex) as a semantic stopsign, when in fact the Kolmogorov distance to bridge the terminological inferential gap is but an epsilon.
You know you’ve been reading Less Wrong too long when you only have to read that comment twice to understand it.
I got waaay too far into this before I realized what you were doing… so well done!
What are you talking about?
I’m afraid I don’t know what you mean by Kolmogorov distance.
Well yes. And I fully support LW moving towards more ordinary terminology. But it’s still good to have someone compiling it all together.
I feel rather confident in saying that it’s (A). I think that domain-specific knowledge without rationality can actually lead to a lot of success.
You actually think someone irrational can do maths, science or engineering?
Yes, absolutely. Aren’t there a bunch of examples of STEM PhDs having crazy beliefs like “evolution isn’t real”?
Actually, that example in particular was about a context outside the laboratory, but I also think that people can be irrational inside the laboratory and still successful. For example, they might do the right things for the wrong reasons (“this is just the way you’re supposed to do it”). That approach might lead to success a lot of the time, but it isn’t based on a true model of how the world works.
(Ultimately, the point I’m making is essentially the same as that Outside The Laboratory post.)
Not a very big bunch.
Since (instrumental) rationality is winning, and these people are winning within their own domains, they are being instrumental rationalists within them. So the complaint that they are not rational enough amounts to the complaint that they are not epistemic rationalists, i.e. they don’t care enough about truth outside their domains. But why should they? Epistemic rationality doesn’t deliver the goodies, in terms of winning... that’s the message of your OP, or it would be if you distinguished ER and IR.
Those who value truth for its own sake, the Lovers of Wisdom, will want to become epistemic rationalists, and may well acquire the skills without MIRI or CFAR’s help, since epistemic rationality is not a new invention. The rest have the problem that they are not motivated, not the problem that they are not skilled (or, rather, that they lack the skills needed to do things they are not motivated to do).
I can see the attraction in “raising the rationality waterline”, since people aren’t completely pigeonholed into domains, and do make decisions about wider issues, particularly when they vote.
MIRI conceives of raising the rationality waterline in terms of teaching skills, but if it amounts to supplementing IR with ER, and it seems that it does, then you are not going to do it without making it attractive. If you merge ER and IR, then it looks like raising the waterline could lead to enhanced winning, but that expectation just leads to the disappointment you and others have expressed. That cycle will continue until “rationalists” realise rationality is more than one thing.
Or, you could just assume that it wouldn’t make sense for Adam Zerner to define winning as failing, so he was referring to rationality as the set of skills that LW teaches.
It’s not like winning <--> losing is the only relevant axis.
It’s definitely relevant to this reasoning:
“Since (instrumental) rationality is winning, and these people are winning within their own domains, they are being instrumental rationalists within them.”
Just assume Adam Zerner wasn’t talking about instrumental rationality in this case.
Heck, I’ll bite the bullet and say that applied science, and perhaps engineering, owe more to ‘irrational’ people. (Not a smooth bullet, to be sure.)
My grandfather worked with turbines. He viewed the world in a very holistic manner, one I can’t reconcile with rationalism no matter how hard I try. He was an atheist who had no problems with his children being Christians (that I know of). His library was his pride, yet he had made no provisions for its fate after his death. He quit smoking after his first heart attack, but went on drinking. He thought electrons’ orbits were circular, and he repaired circuitry often. He preferred to read a book during dinner rather than listen to us talk or teach us table manners.
And from what I heard, he was not half bad at engineering.