LessWrong’s focus on the bay-area/software-programmer/secular/transhumanist crowd seems to me unnecessary. I understand that that’s how the organization got its start, and it’s fine. But when people here tie rationality to being part of that subset, or to high-IQ in general, it seems a bit silly (I also find the near-obsession with IQ a bit unsettling).
If the sequences were being repackaged as a self-help book targeted towards the widest possible audience, what would they look like?
Some of the material is essentially millennia old; self-knowledge, self-awareness, and introspection aren’t new inventions. Any decent therapist will also try to get people to see the “outside view” of their actions. Transhumanism and x-risk probably wouldn’t belong in this book. Bayesian reasoning and cognitive fallacies have plenty of popular descriptions around them.
Effective altruism doesn’t need to be tied to utilitarianism or terms like QALYs. Look at the way the Gates Foundation describes its work, for instance.
The hardline secularism is probably alienating to many people who could still learn a lot (and frankly, are there not many people for whom at least the outward appearance of belief is rational, when it is what ties them to their communities?). Science can be promoted as an alternative to mysticism in a way that isn’t hostile and doesn’t provoke instant dismissal by those who most need that alternative.
Am I missing anything here? Is there some large component of rationalism that can’t be severed from the way it’s packaged on this site and sites like it?
For all the emphasis on Slytherin-style interpersonal competence (not so much on the main site anymore, but it’s easy to find in the archive and in Methods), LW’s historically had a pretty serious blind spot when it comes to PR and other large-scale social phenomena. There’s probably some basic typical-minding in this, but I’m inclined to treat it mostly as a subculture issue; American geek culture has a pretty solid exceptionalist streak to it, and treats outsiders with pity when it isn’t treating them with contempt and suspicion. And we’re very much tied to geek culture. I’ve talked to LWers who don’t feel comfortable exercising because they feel like it’s enemy clothing; if we can’t handle something that superficial, how are we supposed to get into Joe Sixpack’s head?
Ultimately I think we focus on contrarian technocrat types, consciously or not, because they’re the people we know how to reach. I include myself in this, unfortunately.
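A very fair assessment.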
I would also note that often when people DO think about marketing LW, they speak about the act of marketing with outright contempt. Marketing is just a set of methodologies to draw attention to something. As a rationalist, one should embrace that tool for anything one cares about rather than treating it as vulgar.
how are we supposed to get into Joe Sixpack’s head?
A better question is: what exactly are we supposed to do inside Joe Sixpack’s head?
Make him less stupid? No one knows how. Give him practical advice so that he fails less epically? There are multiple shelves of self-help books at B&N, programs run by nonprofits and the government, classes at the local community college, etc., etc. Joe Sixpack shows very little interest in any of those; I don’t see why the Sequences or some distillation of them would do better.
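Nice example of geek exceptionalism there, dude.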
To be fair, it might have some merit if we were literally talking about the average person, though I’m far from certain; someone buys an awful lot of mass-market self-help books and I don’t think it’s exclusively Bay Aryans. But I was using “Joe Sixpack” there in the sense of “someone who is not a geek”, or even “someone who isn’t part of the specific cluster of techies that LW draws from”, and there should be plenty of smart, motivated, growth-oriented people within that set. If we can’t speak to them, that’s entirely on us.
Nah, just plain-vanilla arrogance :-D I am not quite sure I belong to the American geek culture, anyway.
But I was using “Joe Sixpack” there in the sense of “someone who is not a geek”, or even “someone who isn’t part of the specific cluster of techies that LW draws from”
Ah. I read “Joe Sixpack” as being slightly above “redneck” and slightly below “your average American with 2.2 children”.
So do you mean people like engineers, financial quants, the Make community, bright-eyed humanities graduates? These people are generally not dumb. But I am still having trouble imagining what you would want to do inside their heads.
So do you mean people like engineers, financial quants, the Make community, bright-eyed humanities graduates? These people are generally not dumb. But I am still having trouble imagining what you would want to do inside their heads.
The first group of people I thought of was lawyers, who have both a higher-than-average baseline understanding of applied cognitive science and a strong built-in incentive to get better at it. I wouldn’t stop there, of course; all sorts of people have reasons to improve their thinking and understanding, and even more have incentives to become more instrumentally effective.
As to what we’d do in their heads… same thing as we’re trying to do in ours, of course.
same thing as we’re trying to do in ours, of course.
Um. Speaking for myself, what I’m trying to do in my own head doesn’t really transfer to other heads, and I’m not trying to do anything (serious) inside other people’s heads in general.
The hardline secularism is probably alienating to many people who could still learn a lot (and frankly, are there not many people for whom at least the outward appearance of belief is rational, when it is what ties them to their communities?). Science can be promoted as an alternative to mysticism in a way that isn’t hostile and doesn’t provoke instant dismissal by those who most need that alternative.
The hardline secularism (which might be better described as a community norm of atheism, given that some of the community favors creating community structures which take on the role of religious participation) isn’t a prerequisite so much as a conclusion, but it’s one that’s generally held within the community to be pretty basic.
However, so many of the lessons of epistemic rationality bear on religious belief that not addressing the matter at all would probably smack of willful avoidance.
In a sense, rationality might function as an alternative to mysticism. Eliezer has spoken, for instance, about how he tries to present certain lessons of rationality as deeply wise, so that people will not come to it looking for wisdom, find simple “answers,” and be tempted to look for deep wisdom elsewhere. But there’s another very important sense in which, if you treat rationality like mysticism, the result is that you’ll completely fuck up at rationality, and get a group that worships some “rational”-sounding buzzwords without gaining any useful insight into reasoning.
Keep in mind that insofar as Less Wrong has educational goals, it’s not trying to reach as wide an audience as possible, it’s trying to teach as many people as possible to get it right. If “reaching” an audience means instilling them with some memes which don’t have much use in isolation, while leaving out important components of rationality, that measure has basically failed.
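Given that Eliezer wrote HPMOR, he is not really turning away from mysticism and teaching through stories.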
One would expect an alternative to a thing to share enough characteristics with the thing to make it an alternative.
Turkey is an alternative to chicken. Ice cream is not. Teaching rationality through stories and deep-wisdom tropes is an alternative to teaching mysticism through stories and deep-wisdom tropes. Teaching rationality through academic papers is not.
If the sequences were being repackaged as a self-help book targeted towards the widest possible audience, what would they look like?
Simpler language, many examples, many exercises.
And then the biggest problem would be that most people would just skip the exercises, remember some keywords, and think that it made them more rational.
By which I mean that making the book more accessible is a good thing, and we definitely should do it. But rationality also requires some effort from the reader, which cannot be completely substituted by the book. We could reach a wider audience, but it would still be just a tiny minority of the population. Most people just wouldn’t care enough to really do the rationality stuff.
Which means that the book should start with some motivating examples. But even that has limited effect.
I believe there is a huge space for improvement, but we shouldn’t expect magic even with the best materials. There is only so much even the best book can do.
Some of the material is essentially millennia old; self-knowledge, self-awareness, and introspection aren’t new inventions.
The problem is, using these millennia-old methods, people can generate a lot of nonsense. And they predictably do, most of the time. Otherwise, Freud would have already invented rationality, founded CFAR, become a beisutsukai master, built a Friendly AI, and started the Singularity. (Unless Aristotle or Socrates had done it first.) Instead, he just discovered that everything you dream about is secretly a penis.
The difficult part is to avoid self-deception. These millennia-old materials seem quite bad at it. Maybe they were the best of what was available in their time. But that’s not enough. Archimedes could have been the smartest physicist of his time, but he still didn’t invent relativity. Being “best” is not enough; you have to do things correctly.
By which I mean that making the book more accessible is a good thing, and we definitely should do it. But rationality also requires some effort from the reader, which cannot be completely substituted by the book. We could reach a wider audience, but it would still be just a tiny minority of the population. Most people just wouldn’t care enough to really do the rationality stuff.
Okay, this is true. But LessWrong is currently a set of articles. So the medium is essentially unchanged, and all of these criticisms apply to the current form. And how many people do you think the article on akrasia has actually cured of akrasia?
The problem is, using these millennia-old methods, people can generate a lot of nonsense. And they predictably do, most of the time.
First of all, I’m mainly dealing with the subset of material here that deals with self-knowledge. Even if you disagree with “millennia old”, do you disagree with “any decent therapist would try to provide many/most of these tools to his/her patients”?
On the more scientific side, the idea of optimal scientific inquiry has been refined over the years, but the core of observation, experimentation and modeling is hardly new either.
Otherwise, Freud would have already invented rationality, founded CFAR, become a beisutsukai master, built a Friendly AI, and started the Singularity. (Unless Aristotle or Socrates had done it first.) Instead, he just discovered that everything you dream about is secretly a penis.
I do not see what you mean here. Nobody at LW has invented rationality, become a beisutsukai master, built a Friendly AI, or started the Singularity. Freud correctly realized the importance the subconscious has in shaping our behavior, and the fact that it is shaped by past experiences in ways not always clear to us. He then failed to separate this knowledge from some personal obsessions. We wouldn’t expect any methods of rationality to turn Freud into a superhero; we’d expect them to help people reading him separate the wheat from the chaff.
There is also an e-book (which is probably not finished yet; last mention here). It is still just a set of articles, but they are selected and reordered, and the comments are removed, which is helpful, at least for readers like me: when I read the web, I cannot resist reading the comments (which together can be 10 times as long as the article) and clicking hyperlinks, but when I read the book, I obediently follow the page flow.
A good writer could then take this book as a starting point and rewrite it, with exercises. But for this we need a volunteer, because Eliezer is not going to do it. And the volunteer needs to have some skills.
And how many people do you think the article on akrasia has actually cured of akrasia?
Akrasia survey data analysis. Some methods seem to work for some people, but no method is universally useful. The highest success rate was for “exercise to increase energy”, and even that helped only 25% of people; the critical weakness seems to be that most people think it is a good idea but don’t do it. To overcome this, we would need some off-line solutions, like exercising together. (Or maybe a “LessWrong Virtual Exercise Hall”.)
do you disagree with “any decent therapist would try to provide many/most of these tools to his/her patients”?
Yes, I do. Therapists don’t see teaching rationality as their job (although it correlates), wouldn’t agree with some parts of our definitions of rationality (many of them are religious, or enjoy some kind of mysticism), and would consider some parts too technical and irrelevant for mental health (Bayes Rule, Solomonoff Prior, neural networks...).
But when you remove the technical details, what is left is pretty much “do things that seem reasonable”. Which still would be a huge improvement for many people.
On the more scientific side, the idea of optimal scientific inquiry has been refined over the years, but the core of observation, experimentation and modeling is hardly new either.
That’s the theory. Now look at the practice of… say, medicine. How much of it really is evidence-based, and how much of that is double-blind, with a control group, a large enough sample, meta-analysis, et cetera? When you start looking at it closely, actually very little. (If you want a horror story, read about Ignaz Semmelweis, who discovered how to save the lives of thousands of people and provided hard evidence… and how the medical community rewarded him.)
Okay, this is true. But LessWrong is currently a set of articles. So the medium is essentially unchanged, and all of these criticisms apply to the current form.
LessWrong activity seems to shift more into meatspace as time goes on.
We have the study hall for people with akrasia, which provides a different kind of help than just reading an article about akrasia.
CFAR did partly grow out of LW, and it holds workshops.
LessWrong’s focus on the bay-area/software-programmer/secular/transhumanist crowd seems to me unnecessary.
I don’t understand what this means. LW is composed mostly of people from these backgrounds. Are you saying that this is a problem?
But when people here tie rationality to being part of that subset, or to high-IQ in general, it seems a bit silly.
If by rationality you mean systematic winning (where winning can be either truth-seeking (epistemic rationality) or goal-achieving (instrumental rationality)), then no one is claiming that we have a monopoly on it. But if by rationality you are referring to the group of people who have decided to study it and form a community around it, then yes, most of us are high-IQ and in technical fields. And if you think this is a problem, I’d be interested in why.
I also find the near-obsession with IQ a bit unsettling
In other words, my opponent believes something which is kind of like being obsessed with it, and obsession is bad. If you have a beef with a particular view or argument, then say so.
Some of the material is essentially millennia old; self-knowledge, self-awareness, and introspection aren’t new inventions
Eliezer has responded to this (very common) criticism here
I don’t know why you want LW to be packaged for a wide audience. I suspect this would do more harm than good to us, and to the wider audience. It would harm the wider audience because of the sophistication bias, which would cause them to mostly look for errors in thinking in others and not their own thinking. It takes a certain amount of introspectiveness (which LW seems to self-select for) not to become half-a-rationalist.
I don’t understand what this means. LW is composed mostly of people from these backgrounds. Are you saying that this is a problem?
If it creates an exclusionary atmosphere, or prevents people outside that group from reading and absorbing the ideas, or closes this community to outside ideas, then yes. But mostly I think that focusing on presenting these ideas only to that group is unnecessary.
If by rationality you mean systematic winning (where winning can be either truth-seeking (epistemic rationality) or goal-achieving (instrumental rationality)), then no one is claiming that we have a monopoly on it. But if by rationality you are referring to the group of people who have decided to study it and form a community around it, then yes, most of us are high-IQ and in technical fields. And if you think this is a problem, I’d be interested in why.
I am really thinking of posts like this where many commenters agonize over how hard it would be to bring rationality to the masses.
In other words, my opponent believes something which is kind of like being obsessed with it, and obsession is bad. If you have a beef with a particular view or argument, then say so.
I did say what I have a beef with: the attitude that deliberate application of rationality is only for high-IQ people, or that only high-IQ people are likely to make real contributions.
Eliezer has responded to this (very common) criticism here
It’s not a criticism—it’s an explanation for why I don’t believe it would be that difficult to package the content of the sequences for a general audience. None of it needs to be packaged as revelatory. Instead of calling rationality systematic winning, just call it a laundry list of methods for being clear-eyed and avoiding self-deception.
I don’t know why you want LW to be packaged for a wide audience. I suspect this would do more harm than good to us, and to the wider audience. It would harm the wider audience because of the sophistication bias, which would cause them to mostly look for errors in thinking in others and not their own thinking. It takes a certain amount of introspectiveness (which LW seems to self-select for) not to become half-a-rationalist.
Several responses bring up the “half-a-rationalist” criticism, but I think that’s something that can be avoided by presentation. Instead of “here’s a bunch of tools to be cleverer than other people”, present it as “here’s a bunch of tools to occasionally catch yourself before you make a dumb mistake”. It’s certainly no excuse not to try to think of how a more broadly-targeted presentation of the sequences could be put together.
And really, what’s the worst-case scenario? That articles here sometimes get cited vacuously, kind of like those fallacy lists? Not that bad.
Inclusiveness is not a terminal value for me. Certain types of people are attracted to a community such as LW, as with every other type of community. I do not see this as a problem.
Which of the following statements would you endorse, if any?
1) LW should change in such a way as to be more inclusive to a wider variety of people.
1 a) LW members should change how they comment (perhaps avoiding jargon?) so as to be more inclusive.
1 b) LW members should change the topics that they discuss in order to be more inclusive.
2) LW should compose a rewritten set of sequences to replace the current sequences as a way of making the community more inclusive.
3) LW should compose a rewritten set of sequences and publish it somewhere (perhaps a book or a different website) to spread the tools of rationality.
4) LW should try to actively recruit different types of people than the ones that are naturally inclined to read it already.
I don’t think LW needs to change dramatically (though more activity would be nice); I just think it should be acknowledged that the demographic focus is narrow. A wider focus could mean a new community, a growth of LW, or something else.
Mainly #3 and to an extent #4.
I’d modify and combine #4 and #1a/1b into:
5) We should have inclusionary, non-jargony explanations and examples at the ready to express almost any idea on rationality that we understand within LW’s context. Especially ideas that have mainstream analogues, which is most of them. This has many potential uses including #1 and #4.
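What practical steps do you see to make LW less focused on that crowd? What are you advocating?
The book that Eliezer is writing? (What’s the state of play on that, btw?)
Link for info?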
But is he actually planning to change his style? He’s more or less explicitly targeted the bay-area/software-programmer/secular/transhumanist crowd, and he’s openly stated that he’s content with that focus.
I don’t have any more information. The book has been mentioned a few times on LW, but I don’t know what stage it’s at, and I haven’t seen any of the text.
It is a selection of 345 articles, together over 2000 pages, mostly from the old Sequences from the Overcoming Bias era, plus a few long articles from Eliezer’s homepage. The less important articles are removed, and the quantum physics part is heavily reduced.
(I have a draft because I am translating it to Slovak, but I am not allowed to share it. Maybe you could still volunteer as a proofreader, to get a copy.)