Actually, I’m something of a partial data point against that as well. I did come here with the split from OB, but I was just a casual reader; I’d only been there a few weeks and never commented.
I did go back and look at some of your early comments, and my initial reaction is that you seem unusually well-read and instinctively rational, even for this crowd. In fact, I wonder if you should be asking me questions about how to make Less Wrong more accessible to people with limited background knowledge.
you can ask a person questions. ;)
You may regret this. I’m a very curious person.
To what extent was it obvious upon coming here that Less Wrong had a kind of affinity with computer science and programming? What effect did this affinity have on your interest? How much of your interest in participating was driven by Eliezer’s writings in particular, compared to the community in general? Should the barrier to participation be lowered? If so, how would you do it? What would have gotten you up to speed with everyone else faster? What would have made it easier? To what extent did you/do you now associate yourself with transhumanism? Did that factor into your interest in Less Wrong?
I could probably keep going. One more for now: What questions should I be asking you that I haven’t?
What questions should I be asking you that I haven’t?
That’s one question I really like. I originally learnt it from Jerry Weinberg as one of the “context-free questions”, a very useful item in my toolkit.
What I’d ask people is “What motivated you to come here in the first place?”, where “here” takes a range of values: what motivated them to become readers, then to create a profile and become commenters, then to start contributing.
To what extent was it obvious upon coming here that Less Wrong had a kind of affinity with computer science and programming? What effect did this affinity have on your interest?
Not very. What “hooked” me first, as a huge Dennett fan, was the Zombies post, which I came across while browsing random links from my Twitter feed. That led me to the QM sequence, which made sense of things that had never made sense to me before and motivated me to drill deeper. That led me to the Bayes article. Parallel exploration turned up the FAI argument, which (I don’t dare use the word “click” yet) made intuitive sense even though it had never crossed my mind before.
It was only then that I made the connection with CS/programming—I had this fantasy of getting my colleagues to invite Eliezer to keynote at our conference. Interestingly enough, the response I got from musing about that on Twitter was (direct quote) “the singularity will definitely have personality disorders”.
How much of your interest in participating was driven by Eliezer’s writings in particular compared to the community in general?
Well, I’d taken note that there was such a thing as the LW community blog, and I kept an eye on it, but in parallel I started reading all of the back-content of LW: all the posts by Eliezer ported over from OB. I wanted to catch up before increasing my participation. So initially I pretty much ignored the community, which I couldn’t quite figure out anyway.
What would have gotten you up to speed with everyone else faster? What would have made it easier?
I wish someone had told me, quite plainly, what I was expected to do! Something along the lines of: “this is a rationality dojo; posts are where intermediate students show off their moves; comments are where beginners learn from intermediates and demonstrate their abilities; you will be given cues by people ahead of you when you are ready to move along the path reader → commenter → poster”.
Looking back, I can see some mistakes in the way this community is set up that tend to put it at odds with its stated mission; and I’m not at all sure I’d have done any better, given what people knew pre-launch. Figuring out how to participate was also part of the learning process, consistent with (for instance) Lave and Wenger’s notion of “legitimate peripheral participation”.
I’m guessing that this process could be improved by thinking more explicitly about theoretical frameworks of this kind when considering what this community is aiming to achieve and how to achieve it. I’ve done a lot of this kind of thinking in my “secret identity”, with some successes.
To what extent did you/do you now associate yourself with transhumanism? Did that factor into your interest in Less Wrong?
Not much. I’ve been vaguely aware of transhumanist ideas and values for some time, but never explicitly identified as singularitarian, transhumanist, extropian, or anything of the sort. I’ve done most of the background reading that seems to be common in these circles (from a very uninformed outsider’s perspective), but I guess I never was in the right place at the right time to become an insider. It feels as if I might have been.
LessWrong is missing “profile pages” of some kind, where the sort of biographical information that we’re discussing could be collected for later reference. Posting a comment to the “Welcome” thread doesn’t really cut it.
There is a wiki, though it sadly uses a different authentication system. Nonetheless, many users do have profile pages there.
I’m going to remember that one.
I wish someone had told me, quite plainly, what I was expected to do! Something along the lines of, “this is a rationality dojo...”
Indeed. The reason we don’t say that explicitly is that it’s unclear how much this is the case. However, if it were possible for LW to become a “rationality dojo”, I think most of us would leap at the opportunity.
There is some previous discussion which suggests that not everyone here would be happy to see LW as a “rationality dojo”.
The term “dojo” has favorable connotations for me, partly because one of my secret identity’s modest claims to fame is as a co-originator of the “Coding Dojo”, an attempt to bring a measure of sanity back to the IT industry’s horrible pedagogy and hiring practices.
However, these connotations might be biasing my thinking about whether using the “dojo” metaphor as a guide to the evolution of LW would, on balance, be for good or ill.
How about starting a discussion at the top of the current Open Thread to ask people what they now think of applying the Dojo metaphor to LW?
I think I’m the only one on that thread who explicitly advised against starting a rationality dojo; the other concerns were mostly about whether it was possible.
Indeed. Eliezer’s post itself, however, seemed mostly to caution against it, and perhaps what he took away from the subsequent discussion, after weighing the various contributions, was that it had too little to recommend it. At any rate, as far as I’m aware, the question wasn’t raised again.
Of course, one issue is that it was never made clear what “it” might be, i.e. what treating LW more explicitly as a “rationality dojo” would produce that differs from what it is at present.