Yes, you hit the nail on the head. Rationality takes hard work and lots of practice, and too often people on Less Wrong just spend time making clever arguments instead of doing the actual work of asking what the actual answer is to the actual question. It makes me wonder whether Less Wrongers care more about being seen as clever than they care about being rational.
As far as I know, there’s been no attempt to make a rationality/Bayesian reasoning test, which I think is a great pity, because something like that could really help with the above problem.
There are many calibration tests you can take; several articles on this site link to tests of whether you are over- or under-confident on various subjects (search for “calibration”).
What I don’t know is if there has been some effort to do this across many questions, and compile the results anonymously for LWers.
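Compiling such results across many questions is mostly bookkeeping: score each (stated confidence, was it correct?) pair and bucket the answers by confidence. A minimal sketch in Python — the answer data and the 10% bucketing granularity are invented for illustration, not any existing LW tool:

```python
# Toy calibration scorer: each answer is (stated confidence, was it correct?).

def brier_score(answers):
    """Mean squared error between confidence and outcome; 0 is perfect."""
    return sum((c - float(ok)) ** 2 for c, ok in answers) / len(answers)

def calibration_buckets(answers):
    """Observed accuracy per 10%-confidence bucket. A well-calibrated
    answerer's accuracy roughly matches each bucket's confidence."""
    buckets = {}
    for c, ok in answers:
        buckets.setdefault(round(c, 1), []).append(ok)
    return {k: sum(v) / len(v) for k, v in sorted(buckets.items())}

# Hypothetical answerer: 90% confident on four questions, right on only two.
answers = [(0.9, True), (0.9, False), (0.9, True), (0.9, False)]
print(round(brier_score(answers), 2))  # 0.41 -- mediocre
print(calibration_buckets(answers))    # {0.9: 0.5} -- overconfident
```

Aggregating many people’s buckets anonymously would then be a single merge over these per-person dictionaries.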
I caution against jumping quickly to conclusions about “signalling”. Frankly, I suspect you are wrong, and that most of the people here are in fact trying. Some might not be, and are merely looking for sparring matches. Those people are still learning things (albeit perhaps with less efficiency).
As far as “seeming clever” goes, perhaps as a community it makes sense to advocate that people take reasoning tests which do not strongly correlate with IQ and on which people generally do quite poorly (I’m sure someone has a list, though it may be relatively short). That might help people see stupidity as part of the human condition, and not merely a feature of “non-high-IQ” humans.
Fair enough, that was a bit too cynical/negative. I agree that people here are trying to be rational, but you have to remember that signalling need not be deliberate. I definitely detect a strong impulse among the Less Wrong crowd to veer towards controversial and absurd topics rather than practical ones, and to use meta-level thinking and complex abstract arguments instead of simple, solid reasoning. It may not feel that way from the inside, but from the outside it does kind of look like Less Wrong is optimizing for being clever and controversial rather than rational.
I definitely say yes to (Bayesian) reasoning tests. Someone who is not me needs to go do this right now.
I don’t know that there is anything to do, or that should be done, about that outside-view problem. Understanding why people think you’re being elitist or crazy doesn’t necessarily help you avoid the label.
http://lesswrong.com/lw/kg/expecting_short_inferential_distances/
Huh? If the outside view tells you that there’s something wrong, then the problem is not with the outside view but with the thing itself. It has nothing to do with labels or inferential distance. The outside view is a rationalist technique for viewing a matter you’re personally involved in objectively, by taking a step back. I’m saying that when you take a step back and look at things objectively, it looks like Less Wrong spends more time and effort on being clever than on being rational.
But now that you’ve brought it up, I’d also like to add that the Less Wrong habit of assuming that any criticism or disagreement must be due to inferential distance (really just a euphemism for saying the other guy is clueless) is an extremely bad one.
The outside view isn’t magic. Finding the right reference class to step back into, in particular, can be tricky, and the experiments the technique is drawn from deal almost exclusively with time forecasting; it’s hard to say how well it generalizes outside that domain.
Don’t take this as quoting scripture, but this has been discussed before, in some detail.
Okay, you’re doing precisely the thing I hate and am criticizing about Less Wrong. Allow me to illustrate:
LW1: Guys, it seems to me that Less Wrong is not very rational. What do you think?
LW2: What makes you think Less Wrong isn’t rational?
LW1: Well if you take a step back and use the outside view, Less Wrong seems to be optimizing for being clever rather than optimizing for being rational. That’s a pretty decent indicator.
LW3: Well, the outside view has theoretical limitations, you know. Eliezer wrote a post about how it is possible to misuse the outside point of view as a conversation stopper.
LW1: Uh, well, unless I actually made a mistake in applying the outside view, I don’t see why that’s relevant? And if I did make a mistake in applying it, it would be more helpful to say what specifically I did wrong in my inference.
LW4: You are misusing the term inference! Here, someone wrote a post about this at some point.
LW5: Yeah, but that post has theoretical limitations.
LW1: I don’t care about any of that, I want to know whether or not Less Wrong is succeeding at being rational. Stop making needlessly theoretical abstract arguments and talk about the actual thing we were actually talking about.
LW6: I agree, people here use LW jargon as a form of applause light!
LW1: Uh...
LW7: You know, accusing others of using applause lights is a fully generalized counterargument!
LW6: Oh yeah? Well, fully generalized counterarguments are themselves fully generalized counterarguments, so there!
We’re only at LW3 right now, so maybe this conversation can still be saved from becoming typical Less Wrong-style meta screwery. Or, to make my point more politely: please tell me whether or not you think Less Wrong is rational, and whether or not something should be done, because that’s the thing we’re actually talking about.
Dude, my post was precisely about how you’re making a mistake in applying the outside view. Was I being too vague, too referential? Okay, here’s the long version, stripped of jargon because I’m cool like that.
The point of the planning fallacy experiments is that we’re bad at estimating the time we’re going to spend on stuff, mainly because we tend to ignore time sinks that aren’t explicitly part of our model. My boss asks me how long I’m going to spend on a task: I can either look at all the subtasks involved and add up the time they’ll take (the inside view), or I can look at similar tasks I’ve done in the past and report how long they took me (the outside view). The latter is going to be larger, and it’s usually going to be more accurate.
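The contrast between those two estimates is simple enough to write down. A toy sketch — the subtask breakdown and the task history below are invented numbers, purely for illustration:

```python
from statistics import median

def inside_view(subtask_estimates):
    """Inside view: add up per-subtask estimates; blind to unmodeled time sinks."""
    return sum(subtask_estimates)

def outside_view(past_durations):
    """Outside view: how long did similar whole tasks actually take before?"""
    return median(past_durations)

# Hypothetical task: the three subtasks I can foresee, in hours...
subtasks = [2, 3, 1]
# ...versus actual durations of five similar past tasks.
history = [9, 7, 12, 8, 10]

print(inside_view(subtasks))   # 6 -- optimistic: ignores interruptions, rework, etc.
print(outside_view(history))   # 9 -- larger, and typically closer to reality
```

The gap between the two numbers is exactly the unmodeled time the planning-fallacy experiments are about.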
That’s a pretty powerful practical rationality technique, but its domain is limited. We have no idea how far it generalizes, because no one (as far as I know) has rigorously tried to generalize it to things that don’t have to do with time estimation. Using the outside view in its LW-jargon sense, to describe any old thing, is therefore almost completely meaningless; it’s equivalent to saying “this looks to me like a $SCENARIO1”. As long as there also exists a $SCENARIO2, invoking the outside view gives us no way to distinguish between them. Underfitting is a problem. Overfitting is also a problem. Which one’s going to be more of a problem in a particular reference class? There are ways of figuring that out, like Yvain’s centrality heuristic, but crying “outside view” is not one of them.
As to whether LW is rational, I got bored of that kind of hand-wringing years ago. If all you’re really looking for is an up/down vote on that, I suggest a poll, which I will probably ignore because it’s a boring question.
Ok, I guess I could have inferred your meaning from your original post, so sorry if my reply was too snarky. But seriously, if that’s your point I would have just made it like this:
“Dude, you’re only supposed to use the phrase ‘outside view’ with regard to the planning fallacy, because we don’t know if the technique generalizes well.”
And then I’d go back and change “take a step back and look at it from the outside view” into “take a step back and look at it from an objective point of view” to prevent confusion, and upvote you for taking the time to correct my usage of the phrase.
My guess is that the site is “probably helping people who are trying to improve”, because I would expect some of the materials here to help. I have certainly found a number of materials useful.
But a personal judgement of “probably helping” isn’t the kind of thing you’d want. It’d be much better to find some way to measure the size of the effect. Not tracking your progress is a bad, bad sign.
LW8: ...rationality is more than one thing.
My apologies, I thought you were referring to how people who do not use this site perceive people using the site, which seemed more likely to be what you were trying to communicate than the alternative.
Yes, the site viewed as a machine does not look like a well-designed rational-people-factory to me, either, unless I’ve missed the part where it’s comparing its output to its input to see how it is performing. People do, however, note cognitive biases and what efforts to work against them have produced, from time to time, and there are other signs that seem consistent with a well-intentioned rational-people-factory.
And, no, not every criticism does. I can only speak for myself, and acknowledge that I have a number of times in the past failed to understand what someone was saying and assumed they were being dumb or somewhat crazy as a result. I sincerely doubt that’s a unique experience.
I am now reading http://lesswrong.com/lw/ec2/preventing_discussion_from_being_watered_down_by/ and other articles, because they are pertinent and I want to know what sorts of work have been done to figure out how LW is perceived and why.