I’ve been commenting for a few months now, but never introduced myself in the prior Welcome threads. Here goes: Student, electrical engineering / physics (might switch to math this fall), female, DC area.
I encountered LW when I was first linked to Methods a couple years ago, but found the Sequences annoying and unilluminating (after having taken basic psych and stats courses). After meeting a couple of LWers in real life, including my now-boyfriend Roger (LessWrong is almost certainly a significant part of the reason we are dating, incidentally), I was motivated to go back and take a look, and found some things I’d missed: mostly, reductionism and the implications of having an Occam prior. This was surprising to me; after being brought up as an anti-religious nut, then becoming a meta-contrarian in order to rebel against my parents, I thought I had it all figured out, and was surprised to discover that I still had attachments to mysticism and agnosticism that didn’t really make any sense.
My biggest instrumental rationality challenge these days seems to be figuring out what I really want out of life. Also, dealing with an out-of-control status obsession.
To cover some typical LW clusters: I am not signed up for cryonics, and am not entirely convinced it is worth it. And I am interested in studying AI, but mostly because I think it is interesting and not out of Singularity-related concern. (I get the feeling that people who don’t share the prominent belief patterns about AI/cryonics hereabouts think they are much more of a minority than they actually are.)
I’m not quite sure what you’re referring to by “the prominent belief patterns,” but neither low confidence that signing up for cryonics results in life extension, nor low confidence that AI research increases existential risk, are especially uncommon here. That said, high confidence in those things is far more common here than elsewhere.
That is more or less what I am trying to say. It’s just that I’ve noticed several people on Welcome threads saying things like, “Unlike many LessWrongers, I don’t think cryonics is a good idea / am not concerned about AI risk.”