I think you hit the nail on the head. It seems to me that LW represents a bracketing by rationality: there’s a lower limit below which you don’t find the site interesting, a range within which you see it as a rationality community, and an upper limit above which you would see it as self-important pompous fools being very wrong on a few topics and not interesting on the others.
Dangerously wrong, even; progress in computing technology leads to new cures for diseases, and misguided advocacy about the great harm of such progress, done by people with no understanding of the limitations of computational processes in general (let alone ‘intelligent’ processes), is not unlike anti-vaccination campaigning by people with no solid background in biochemistry. Donating to vaccine-safety research performed by someone without a solid background in biochemistry is not only stupid, it will kill people. Computer science is no different, now that it is used for biochemical research. No honest, moral individual can go ahead and speak of the great harms of medically relevant technologies without first obtaining a very solid understanding of the boring fundamentals, and without independently testing oneself, to avoid self-delusion, by doing something competitive in the field. Especially so when those concerns are not shared by more educated, knowledgeable, or accomplished individuals. The only way it could be honest is if one honestly believes oneself to be a lot, lot, lot smarter than the smartest people on Earth, and one can’t honestly believe such a thing without either accomplishing something impressive that a great number of the smartest people failed to accomplish, or being a fool.
Are you aware of another online community where people more rational than LWers gather? If not, any ideas about how to create such a community?
Also, if someone was worried about the possibility of a bad singularity, but didn’t think that supporting SIAI was a good way to address that concern, what should they do instead?
Another online community where people more rational than LWers gather? For instrumental rationality, i.e. “winning”: lots...
For epistemic rationality: none...
Tell SIAI why they don’t support it, and thereby give SIAI an incentive to change.
Precisely.
I’m not sure it got that either. It’s more like medieval theology / scholasticism: there are questions you think you need answered, you can’t answer them now with logical thought, so you use an empty, cargo-cult imitation of reasonable thought. How rational is that? Not rational at all. Wei_Dai is here because he was concerned with AI, and he calls this community rational because he sees concern with AI as rational and needs confirmation. It is a neatly circular system: if concern with AI is rational, then every community that is rational must be concerned with AI, and then communities that are not concerned with AI are less rational.
I notice that you didn’t actually answer any of my questions. Earlier you said there is “an upper limit above which you would see it as self-important pompous fools being very wrong on a few topics and not interesting on the others”. It seems to me that if that were actually the case, then there would be communities of such people talking about topics they think are interesting, and in a way that is noticeably more rational than typical discussions on lesswrong. If you are right, why can’t you give an example, or at least take an interest in trying to create such a community?
Note that my question isn’t purely rhetorical. If such a community actually exists then I’d like to join or at least eavesdrop on them.
Weighing back in here, I will clarify the comment of mine that the comment you quote was based on: my OH had this precise thought (“self-important pompous fools”) when he first came across this site. The content of the sequences he found trivial. He generally finds it easy to be rational, and didn’t see the point of getting a community together to learn how to be more rational. In fact, it’s a large (reverse) inferential distance for him just to understand that some people find figuring these ideas out genuinely non-trivial (and yet still care about them). He doesn’t understand how people can compartmentalise their minds at all.
Very few people sort themselves in bands according to “rationality”, and my OH takes part in just regular discussions with regular smart people, except he’s better at correcting wrong arguments than most. “Some people being unfixably wrong about things” is just a part of life for him, and without ideas like transhumanism to motivate you, it’s quite hard to bring yourself to care about how wrong the rest of the world is—just being right yourself is sufficient.
Thanks for this explanation. Does your OH participate in discussions here? If so, does he enjoy them (more than discussions with “regular smart people”)? Do you or he have any suggestions for how we might better attract people like him (i.e., people who are “naturally” rational and find it hard to understand at first why Eliezer is making a big deal out of “rationality”)?
He doesn’t participate in discussions here, because he doesn’t think he has anything new to add. (This discussion is starting to prove otherwise though!)
I asked him about what more could be done to attract people like him. He said: “I think what you need to do to encourage people like me is essentially to pose some interesting problems (e.g. free will) prominently, along with a hint that there is an /interesting/ solution (e.g. suggesting that the free will question is similar to the tree-in-a-forest question in how it can be answered). That would give a stronger incentive to read on.”
So basically, what would help is having an intro page which says “here’s where to start, but if you already know the basics, here are some interesting problems to draw you in”.
The other problem for him is that a lot of the content reads like what he calls ‘pulp-philosophy’: being to philosophy what pulp fiction is to literature. “If you find an average philosophy blog, it is either uninteresting or wrong, but has a really inflated view of itself. There is a lot of philosophy/rationality stuff on the internet, which had primed me to just ignore that kind of website.”
If there is a way, then, to distinguish LW from lesser websites without worsening other problems with other audiences, that might be good, though I personally can’t think of anything that would help on this front.
Did your OH read Yudkowsky’s posts on free will? If so, what does he think of them?
He did, that’s what prompted the statement. He found them really interesting, and almost got to the right answer before one of our friends spoilered it, but he enjoyed the challenge and does enjoy thinking about things that way.
Implied in your so-called ‘question’ is the claim that you deem every online community you know of (I shouldn’t assume you know of zero other communities, right?) less rational than lesswrong. I would say lesswrong is substantially less rational than average, i.e. if you pick a community at random, it is typically more rational than lesswrong. You can choose any place better than average: physicsforums, gamedev.net, stackexchange, the arstechnica observatory, and so on; those are all more rational than LW. But of course, implied in your question is that you won’t accept this answer. LW is rather interested in AI, and the talk about AI here is significantly less rational than the talk about almost any technical topic in almost any community of people with technical interests. You would have to go to some alternative-energy forum or a UFO or conspiracy-theorist site to find a match for the irrationality of the discussion of the topics of interest.
You would have no problem whatsoever finding and joining a more rational place, if you were looking for one. That is why your ‘question’ is in fact almost purely rhetorical (or you are looking for a place that is more ‘foo’ than lesswrong, where you use the word ‘rationality’ in place of ‘foo’).
Can you list some specific examples of irrational thinking patterns that occur on LessWrong but not on those communities? The one guess I can make is that they’re all technical-sounding, in which case they might exist in the context of a discipline that has lots of well-defined rules and methods for testing success, and so less “bullshit” gets through because it obviously violates the rules of X-technical-discipline. Is this what you mean, or is it something else entirely?
I see; I had taken your earlier comment (the one I originally replied to) as saying that lesswrong was above average but that there were even more rational people elsewhere (otherwise I probably wouldn’t have bothered to reply). But since we’re already talking: if you actually think it’s below average, what are you hoping to accomplish by participating here?
Rationality and intelligence are not precisely the same thing. You can pick, for example, those anti-vaccination campaigners who have a measured IQ above 120, put them in a room, and call that a very intelligent community that can discuss a variety of topics besides vaccines. Then you will get some less insane people who are interested in vaccine safety coming in and getting terribly misinformed, which just is not a good thing. You can do that with almost any belief, especially using the internet, which lets you draw such cases from a pool of a billion or so people.
How did Eliezer create LW?