I read LessWrong primarily for entertainment value, but I share your concerns about some aspects of the surrounding culture, although in fairness it seems to have got better in recent years (at least as far as is apparent from the online forum; I don’t know about live events). Specifically, my points of concern are:
The “rationalist” identity: It creates the illusion that by identifying as a “rationalist” and displaying the correct tribal insignia you are automatically more rational, or at least “less wrong”, than outsiders.
Rituals: Deliberately modelled after religious rituals, including “public confession” sessions which are, AFAIK, similar to those performed by cults like the Church of Synanon.
MIRI: I agree with you that they probably exaggerate the AI risk, and I doubt they have the competence to do much about it anyway. For its first ten or so years, when staffed primarily by Eliezer Yudkowsky, Anna Salamon, etc., the organization produced effectively zero valuable research output. In recent years, under the direction of Luke Muehlhauser and with researchers such as Paul Christiano and other young guns, they may have got better, but I’m still waiting to see any technical result of theirs published in a peer-reviewed journal or conference.
CFAR: a self-help/personal-development program, questionable like all self-help/personal-development programs in existence. If I understand correctly, CFAR is modelled after, or at least is similar to, Landmark, a controversial organization.
Pseudo-scientific beliefs and practices: cryonics (you are signed up, so you probably don’t agree), paleo diets/ketogenic diets, armchair evopsych, and so on. It seems to me that as long as something is dressed in sufficiently “sciency” language and endorsed by high-status members of the community, a sizable number (though not necessarily a majority) of LessWrongers will buy into it. Yes, this kind of effect happens in all groups, but from a group of people with an average IQ of 140 who pride themselves on pursuing rationality I would have expected better.
In recent years, under the direction of Luke Muehlhauser and with researchers such as Paul Christiano and other young guns, they may have got better, but I’m still waiting to see any technical result of theirs published in a peer-reviewed journal or conference.
http://intelligence.org/2014/05/17/new-paper-program-equilibrium-prisoners-dilemma-via-lobs-theorem/ :

We’ve released a new paper, recently accepted to the MIPC workshop at AAAI-14: “Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem” by LaVictoire et al.
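For anyone who hasn’t read it, the core of the construction (as I understand it) can be stated compactly; a minimal sketch, writing □ for provability in Peano Arithmetic:

```latex
% Löb's theorem: for any sentence P,
%   if PA proves (□P → P), then PA proves P.
\Box(\Box P \to P) \to \Box P

% FairBot, from the paper: cooperate with opponent X exactly when
% PA proves that X cooperates back.
\mathrm{FB}(X) = C \iff \Box\,[\,X(\mathrm{FB}) = C\,]

% Self-play: let P be the sentence "FB(FB) = C". FairBot's definition
% makes (□P → P) provable, so Löb's theorem yields □P, hence P:
% FairBot provably cooperates with itself, without circular regress.
```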
http://intelligence.org/2014/05/06/new-paper-problems-of-self-reference-in-self-improving-space-time-embedded-intelligence/ :

We’ve released a new working paper by Benja Fallenstein and Nate Soares, “Problems of self-reference in self-improving space-time embedded intelligence.” [...]

Update 05/14/14: This paper has been accepted to AGI-14.

Didn’t know about that. Thanks for the update.
We only really agree on the first point. I’m skeptical of CFAR and the ritual crew but don’t find these supposed comparisons to be particularly apt.
I’ve watched MIRI improve their research program dramatically over the past four years, and expect it to keep improving. Yes, obviously they had some growing pains in learning how to publish, but everyone who tries to do publishable work goes through that phase (myself included).
I’m not on board with the fifth point:
cryonics (you are signed up, so you probably don’t agree)
Well, 27.5% have a favorable opinion. The prior for it actually working seems optimistic but not overly so (“P(Cryonics): 22.8 ± 28 (2, 10, 33) [n = 1500]”). At the least I’d say it’s a controversial topic here, for all the usual reasons. (No, I’m not signed up for cryonics. No, I don’t think it’s very likely to work.)
paleo diets/ketogenic diets
Most of the comments on “What is the evidence in favor of paleo?” are skeptical. The comment with the highest karma is very skeptical. Lukeprog said he’s skeptical and EY said it didn’t work for him.
armchair evopsych
Not really sure what you’re referring to.
Surprised you didn’t bring up MWI; that’s the usual hobby horse for this kind of criticism.
We only really agree on the first point. I’m skeptical of CFAR and the ritual crew but don’t find these supposed comparisons to be particularly apt.
Ok.
I’ve watched MIRI improve their research program dramatically over the past four years, and expect it to keep improving.
I agree that it improved dramatically, but only because the starting point was so low. In recent years they released some very technical results. I think some are probably wrong or trivial while others are probably correct and interesting, but I don’t have the expertise to properly evaluate them, and this probably applies to most other people as well, which is why I think MIRI should seek peer review from independent experts.
Well, 27.5% have a favorable opinion. The prior for it actually working seems optimistic but not overly so (“P(Cryonics): 22.8 ± 28 (2, 10, 33) [n = 1500]”). At the least I’d say it’s a controversial topic here, for all the usual reasons. (No, I’m not signed up for cryonics. No, I don’t think it’s very likely to work.)
As I said, these beliefs aren’t necessarily held by a majority of lesswrongers, but are unusually common.
Surprised you didn’t bring up MWI; that’s the usual hobby horse for this kind of criticism.
MWI isn’t pseudo-scientific per se. However, the claim that MWI is obviously true, and that whoever thinks otherwise must be ignorant or irrational, is.
I agree that it improved dramatically, but only because the starting point was so low.
The starting point is always low. Your criticism applies to me, a mainstream applied-mathematics graduate student.
I started research in my area around 2009.
I have two accepted papers, both of which are relatively technical but otherwise minor results.
I also wasn’t working on two massive popularization projects, obtaining funding, courting researchers (well, I flirted a little bit) and so on.
Applied math is widely regarded as having a low barrier to publication, with acceptable peer-review times in the six to eighteen month range. (Anecdote: My first paper took nine months from draft to publication; my second has taken seven months so far and isn’t in print yet. My academic brother’s main publication took twenty months.) I think it’s reasonable to take this as a lower bound on publication times in game theory, decision theory, and mathematical logic.
Considering this, even if MIRI had sought to publish some of their technical writings in independent journals, we probably wouldn’t know yet whether most of them had been accepted or rejected. If things don’t change in five years, then I’ll concede that their research program hasn’t been particularly effective.
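To make that timeline arithmetic concrete, a minimal sketch with a purely hypothetical submission date (I don’t know MIRI’s actual submission history):

```python
# Hypothetical example: the submission date below is assumed, not taken
# from MIRI's actual history; the 6-18 month window is the peer-review
# range cited above.

def add_months(year, month, k):
    """Return (year, month) shifted forward by k months."""
    t = (month - 1) + k
    return year + t // 12, t % 12 + 1

def decision_window(sub_year, sub_month, lo=6, hi=18):
    """Earliest and latest expected decision dates for a submission."""
    return add_months(sub_year, sub_month, lo), add_months(sub_year, sub_month, hi)

# A paper hypothetically submitted in November 2013:
earliest, latest = decision_window(2013, 11)
print(earliest, latest)  # (2014, 5) (2015, 5) -> likely still under review in mid-2014
```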
It seems to me that as long as something is dressed in sufficiently “sciency” language and endorsed by high-status members of the community, a sizable number (though not necessarily a majority) of LessWrongers will buy into it.

I use the term “new rationalism”.

I’d still really love a better term than that. One that doesn’t use the R-word at all, if possible. (“Neorationalism” is tempting but similarly well below ideal.)
“Pseudo-rationalism.”

Since that is exactly what is being claimed about it, one might as well put it in the name. It does use the R-word, but only to negate it, which is the point. “New rationalism” suggests there is something wrong with actually being rational, which I hope isn’t anyone’s intention in this thread.
Trouble is, that echoes “pseudoskeptic”, which is a term that should be useful but is overwhelmingly used only by those upset at their personal toe being stepped on (“critiquing me? You’re doing skepticism wrong!”), to the point where it’s a pretty useful crank detector.
That is not a problem with the word but with the thing. It does not matter what opposition to bad skepticism is called. If it exists as a definite idea, it will acquire a name, and whatever name it is called by will be used in that way.
“New rationalism” is even worse: the name suggests not that there is such a thing as bad reasoning, but that reasoning is bad.
Perhaps a better idea would be not to call it anything, nor to make a thing of it. Instead, someone dissatisfied with how it is being done on LW might more fruitfully devote their energies to demonstrating how to do it better.
a term that should be useful but is overwhelmingly used only by those upset at their personal toe being stepped on (“critiquing me? You’re doing skepticism wrong!”), to the point where it’s a pretty useful crank detector.
Well, isn’t that a self-evidently dangerous heuristic. (“Critiquing me? You’re just doing the calling-me-a-pseudoskeptic crank behavior!”)
I don’t think that either armchair evopsych or the paleo movement is characterised by meta-reasoning. Most individuals who believe in those things aren’t on LW.
It seems to me that as long as something is dressed in sufficiently “sciency” language and endorsed by high-status members of the community, a sizable number (though not necessarily a majority) of LessWrongers will buy into it.
What exactly do you mean by “buying into it”? I think there are places on the internet with a lot more armchair evopsych than LW.
Rituals: Deliberately modelled after religious rituals, including “public confession” sessions
Could you provide a link? I’m not aware of that ritual on LW, if you mean something more than encouraging people to admit when they are wrong.
What exactly do you mean by “buying into it”? I think there are places on the internet with a lot more armchair evopsych than LW.
Sure, but I’d expect that a community devoted to “refining the art of human rationality” would be more skeptical of that type of claim.
Anyway, I’m not saying that LessWrong is a terribly diseased community. If I thought it was, I wouldn’t be hanging around here. I was just expressing my concerns about some aspects of the local culture.
Could you provide a link? I’m not aware of that ritual on LW, if you mean something more than encouraging people to admit when they are wrong.

https://www.google.com/search?q=less+wrong+ritual&ie=utf-8&oe=utf-8#channel=fs&q=ritual+report+site:lesswrong.com

http://lesswrong.com/lw/9aw/designing_ritual/

And in particular “Schelling Day”, which bothers me the most: http://lesswrong.com/lw/h2t/schelling_day_a_rationalist_holiday/
Sure, but I’d expect that a community devoted to “refining the art of human rationality” would be more skeptical of that type of claim.

In that case I think you overrate the amount of energy the average person in the community invests in it. LW is very diverse as far as opinions go.
I myself dislike certain talk about signaling, where armchair evopsych sometimes appears, but the idea of signaling is rooted in game theory (see the sketch below).
There are also people on LW who do read real evopsych and make arguments on that basis.
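On signaling being rooted in game theory: a minimal sketch of the textbook Spence-style setup (all the numbers are made up), just to show the incentive constraints that make a costly signal informative:

```python
# Toy Spence job-market signaling model; numbers invented for
# illustration. Education here is a pure signal: costly, cheaper per
# unit for the high-productivity type, and it does not raise
# productivity itself.

THETA_LOW, THETA_HIGH = 1.0, 2.0  # productivities (paid as wages)

def cost(education, theta):
    """Signaling cost: cheaper for more able types."""
    return education / theta

e_star = 1.5  # education level employers read as "high type"

# A separating equilibrium requires both incentive constraints to hold:
low_wont_mimic = THETA_LOW >= THETA_HIGH - cost(e_star, THETA_LOW)
high_will_signal = THETA_HIGH - cost(e_star, THETA_HIGH) >= THETA_LOW

print(low_wont_mimic, high_will_signal)  # True True -> the signal is credible
```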
I wasn’t aware of Schelling Day.