LessWrong.com is my favorite website. I’ve tried having thoughts on other websites and it didn’t work. Seriously, though—I feel very grateful for the effort you all have put into making this an epistemically sane environment. I have personally benefited a huge amount from the intellectual output of LW—I feel smarter, saner, and more capable of positively affecting the world, not to mention all the gears-level knowledge I’ve learned and the model-building I’ve done as a result, which has really been a lot of fun :) And when I think about what the world would look like without LessWrong.com, I mostly just shudder and then regret thinking of such dismal worlds.
Some other thoughts of varying import:
I dislike emojis. They feel like visual clutter to me. I also feel somewhat assaulted when I read through comments sometimes, as people’s opinions jump out at me before I’ve had much chance to form my own.
I like dialogues a lot more than I was expecting. What I expected was something like “people will spend a bunch of time talking past each other in their own mentalese with little effort towards making the reader capable of understanding and it’ll feel cluttered and way too long and hard to make much sense of.” I think this does sometimes happen. But I’ve also been pleasantly surprised by the upsides which I was not anticipating—seeing more surface area on people’s thoughts which helps me make sense of their “deal” in a way that’s useful for modeling their other views, (relatedly) getting a better sense of how people generate thoughts, where their intuitions are coming from, and so on. It also makes LW feel more homey, in my opinion.
If there were one dial I’d want to experiment with turning on LW, it would be writing quality, in the direction of more of it. I don’t feel like I have super great ideas on how to cultivate this, but I’ll just relay the sort of experience that makes me say this. Sometimes I want to understand something someone has said. I think “ah, they probably said that there,” and then I go to a post, skim it, find the sort-of-related thing but it’s not quite right (they talk around the point without really saying it, or it’s not very clear, etc.). But they link to ten other posts of theirs, all promising to tell me the thing I think they said, so I follow those links, but they’re also a bit slippery in the same ways. And I feel like I go in circles trying to pin down exactly what the claims are, never quite succeeding, until I feel like throwing up my hands in defeat. To some extent this seems like just par for the course with highly intellectually productive people—ideas outpace idea management and legibility, and in the absence of the sort of streamlined clarity that I’m more used to seeing in, e.g., books, I would on the margin prefer they still publish. But I do think this sort of thing can make it harder to push the frontier of human knowledge together, and if I did have a dial I could turn to make writing quality better (clearer, more succinct, more linear, etc.), even at the expense of somewhat fewer posts, I’d at least want to try that for a bit.
Something has long bothered me about how people talk about “p(doom)” around here. Like, here’s an experience I have regularly: I tell someone I am hoping to take an action in the future, and they say “haha, what future? we’ll all be dead by then!” I really dislike this, not because I disagree that we’re facing serious risks, or because I think it’s never okay to joke about that, but because I often don’t believe them. It seems to me that in many conversations high p(doom) is closer to type “meme” than “belief,” like a badge people wear to fit into the social fabric.
But also, it feeds into this general vibe of nihilistic hopelessness that the Bay Area rationality scene has lapsed into, according to me, which I worry stems in part from deferring to Eliezer’s/Nate’s hopelessness. And I don’t know, if you really are on-model hopeless, I guess that’s all well and good, but on a gut level I just don’t really buy that this makes sense. Alignment seems like a hard science problem but not an impossible one, and I think that if we actually try, we may very well have a good shot at figuring it out. But at present it feels to me like so few people are trying to solve the hard parts of the problem—that so much work has gone meta (e.g., community building, power for the sake of power, deferring the “solving it” part to uploads or AI); that even though people concede there’s some chance things go well, in their gut they basically just have some vague sense of “we’re fucked” which inhibits them from actually trying; that somehow our focus has become about managing tenth-order effects of the social graph, the “well what if this faction does this, then people will update this way and then we’ll lose influence over there”… I don’t know, it just sort of feels like we’ve communally lost the spirit of something that seems really powerful to me—something that I took away from the Sequences—a sense of agency, ambition, truth-seeking, and integrity in the face of hard problems. A sense that we can… solve this. Like actually solve the actual problem! I would like that spirit back.
I’m not sure how to get it, exactly, and I don’t know that this is aimed at the LW team in particular rather than being nebulously aimed at “Bay Area rats” or something. But just to add one small piece that I think LW could work on: I’ve occasionally seen the mods slip from “I think we are doomed” language to “we’re doomed” language. I’ve considered bringing it up, but for any particular instance it feels a bit too aggressive relative to the slight, and I get that it’s annoying to append your epistemic state to everything, and so on. But I do think that on this topic in particular it’s good to be careful, as it’s one of the most crazy-making aspects of this situation, and one where it seems especially easy to spiral into group-think-y/deferral-y dynamics.
I feel sad about ending on a bad note, mostly because I feel sad that so many people seem to be dunking on MIRI/rationality/LW lately. And I have some kind of “can we please not throw the baby out with the bathwater” sense. I certainly have some gripes with the community, but on net I am really happy that it exists. And I continue to believe that the spirit of rationality is worth fighting for—both because it’s beautiful for its own sake and because I believe in its ability to positively shape our lightcone. I see LW as part of that mission, and I feel deeply grateful for it.
I’d like to highlight this. In general, I think fewer things should be promoted to the front page.
[edit, several days later]: https://www.lesswrong.com/posts/SiPX84DAeNKGZEfr5/do-websites-and-apps-actually-generally-get-worse-after is a prime example. This has nothing to do with rationality or AI alignment. This is the sort of off-topic chatter that belongs somewhere else on the Internet.
[edit, almost a year later]: https://www.lesswrong.com/posts/dfKTbyzQSrpcWnxfC/2025-color-trends is an even better example of off-topic cross-posting that the author should not be rewarded for doing.