When I think about where rationalists came from, my answer is 1) EY writing the original sequences, and 2) EY writing HPMOR. It feels like those things happened, tons of people joined, and then they stopped happening, and people stopped joining.
Is this really the case? I’m eighteen, and I know a few people here in my age group.
I also didn’t take either of the membership routes you listed. When I was 16.75 (±0.33) years old, I happened upon a Slate Star Codex post (might’ve been this one). I thought “Wow, this blog is great!” and proceeded to read the entire SSC back catalog[1]. Once I ran out of posts, I saw “LessWrong” under SSC’s “blogroll” header, and the rest is history. I didn’t systematically read the sequences[2], but instead just read whatever looked interesting on the frontpage. I had previously read Superintelligence, Thinking, Fast and Slow, and Surely You’re Joking, Mr. Feynman!, so I already had some background exposure to the ideas discussed here.
I was trying to curb my Reddit addiction, so I used SSC, Hacker News (and later, LW) as substitutes. Still do.
And I never have. Did read HPMoR, though.
I’ll be thrilled to find out that my premise is wrong!
Don’t over-index on this particular answer being a refutation of your hypothesis!
I came to LessWrong via HPMOR, and I’ve thought in the same vein myself (if HPMOR/equivalent = more incoming rationalists, no HPMOR/equivalent = …fewer incoming rationalists?).
My experience was similar. I am a little older (early 30s). I stumbled into a random LessWrong article when looking for something specific online, then got really into the site just from “homepage shooting”. HPMOR came quite late in the day for me. Like you, I only half read “the sequences”. (Not anything as sensible as “the first half”; I just turned them into Swiss cheese from my homepage shooting.)
I’m 22, and I came from SSC as well, but my intuition is that most people here are older than me.
I’m 22 (±0.35) years old and have been seriously getting involved with AI safety over the last few months. However, I chanced upon LW via SSC a few years ago (directed to SSC by Guzey) when I was 19.
The generational shift concerns me because as we start losing people who’ve accumulated decades of knowledge (of which only a small fraction is available to read or watch), a lot of time could be wasted re-developing ideas along routes that have already been explored. Of course, there’s a lot of utility in coming up with ideas from the ground up, but at some point you accept and build upon an existing framework based on true statements. Whether or not timelines are shorter than we expect, this is a cause for concern.