The general AI x-risk community on LW has pretty bad takes overall and also seems to engage in pretty sloppy stuff: both sloppy ML research (by the standards of normal ML research) and sloppy reasoning.
There are few things people seem as badly calibrated on as “the beliefs of the general LW community”. Mostly, people cherry-pick random low-karma people they disagree with if they want to present it in a bad light, or cherry-pick the people they work with every day if they want to present it in a good light.
You yourself are among the most active commenters in the “AI x-risk community on LW”. It seems very weird to ascribe a generic “bad takes overall” summary to that group, given that you yourself are directly part of it.
Seems fine for people to use whatever identifiers they want for a conversation like this, and I am not going to stop it, but the above sentences seemed like pretty confused generalizations.
You yourself are among the most active commenters in the “AI x-risk community on LW”.
Yeah, lol, I should maybe be commenting less.
It seems very weird to ascribe a generic “bad takes overall” summary to that group, given that you yourself are directly part of it.
I mean, I wouldn’t really want to identify as part of “the AI x-risk community on LW”, in the same way I expect you wouldn’t want to identify as “an EA” despite relatively often doing things heavily associated with EAs (e.g., posting on the EA Forum).
I would broadly prefer people don’t use labels which place me in particular in any community/group that I seem vaguely associated with, and I generally try to extend the same to other people (note that I’m talking about some claim about the aggregate attention of LW, not necessarily any specific person).
I mean, I wouldn’t really want to identify as part of “the AI x-risk community on LW”, in the same way I expect you wouldn’t want to identify as “an EA” despite relatively often doing things heavily associated with EAs (e.g., posting on the EA Forum).
Yeah, to be clear, that was like half of my point: a very small fraction of top contributors identify as part of a coherent community, and trying to summarize their takes as if they did is likely to end up confused.
LW is very intentionally designed and shaped so that you don’t need substantial social ties, or to become part of a community, in order to contribute (and I’ve made many pretty harsh tradeoffs in that direction over the years).
Inasmuch as some people do, I don’t think it makes sense to give their beliefs outsized weight when trying to think about LW’s role as a discourse platform. The vast majority of top contributors are as allergic to labels as you are.
It seems very weird to ascribe a generic “bad takes overall” summary to that group, given that you yourself are directly part of it.
This sentence channels the influence of an evaporative cooling norm (upon observing bad takes, either leave the group or conspicuously ignore them), and it also places weight on acting on the basis of one’s identity. (I’m guessing this is not in tune with your overall stance, but it’s evidence of the presence of a generator for the idea.)
I was just referring to “what gets karma on LW”. Obviously, it’s unclear how much we should care.
Makes sense. I think generalizing from “what gets karma on LW” to “what the people thinking most about AI x-risk on LW consider important” is pretty fraught (especially at the upper end, karma is mostly a broad popularity measure).
I think using the results of the annual review is a lot better, and IMO the top alignment posts in past reviews have mostly pretty good takes in them (my guess is, also by your lights), and the ones that don’t have reviews poking at the problems pretty well. My guess is you would still have lots of issues with posts scoring highly in the review, but I would be surprised if you would summarize the aggregate as “pretty bad takes”.