There was never a point in the past ~nine years of knowing about it when I viewed LessWrong as anything but the place the odd, annoying, mostly wrong ai safety people went, even though I'd participated in the community around it for about that long, mostly without reading much here. Eventually I decided I wanted to talk to them more. I generally think of lesswrong as a place to talk at people who won't listen but who need to hear another perspective—and I say that while agreeing that the world is in dire straits and needs saving from superhuman agency (which I think is currently concentrated in human organizations that people here consistently underestimate). I see it as a scientific forum of middling quality related to the research topic I care about. I occasionally find something I like on it, try to share it, and get blowback from one group of friends for browsing that site again. The upvote mechanism seems to vigorously promote groupthink, especially given the harsh effect of downvotes on newbies. I do think of it as one of the few spaces online that consistently serves as conversation ground between progressive classical liberals and conservative classical liberals, so that's nice, I guess.
Got any better forums to point me to? I’ll take a look and decide for myself how they compare to LessWrong.
nope! middling quality is the best I know of. the comparison points that give lesswrong its middling feel are places that, despite being lower quality on their own, each have something LW would need in order to hit what I'd consider "high quality on all dimensions". I've been pretty happy with certain kinds of discussions on eg bluesky, and I think there are elements of discourse there that are good and missing here, but the nuggets of good there are smaller and not part of an overall project of epistemics; they're just people having interesting takes. There are also some mastodon instances I've occasionally seen good takes from, but I don't go deep on mastodon. I have a lot of frustrations with LW that make me feel people here are missing important insights, and I'm working on how to distill and present them. but I know of nothing as distilled-good as lesswrong; I just still find that distilled good to be painfully lacking on certain traits.
to be clear, becoming an ai safety person myself over the past 5 years didn't change my view that most ai safety people are odd, annoying, and mostly wrong.
More later, thanks for poking me to get my ass in gear about communicating the things I’m being vague about in this comment. My ass is pretty far out of gear, so don’t expect this too soon.
OK, I’ll check out bluesky, thanks.
it’s basically just twitter minus elon vibes, so it’s not exactly highly novel, fwiw. I have invite codes if you decide you do want them.