As a datapoint—my reasons for mostly not participating in discussion here:
The karma system messes with my S1 motivations and research taste; I do not want to update toward “LW average taste”, because I don’t think LW average taste is that great. Also, IMO on the margin it is better for the field to add people who are trying to orient themselves in AI alignment independently, rather than people guided by “what’s popular on LW”
Commenting seems costly; it feels like comments are expected to be written very clearly and in a reader-friendly way, which takes a lot of time
Posting seems super-costly; my impression is that many readers are calibrated on the writing quality of Eliezer, Scott & the like, not on informal research conversation
Quality of debate on topics I find interesting is much worse than in person
Not the top reason, but still… the system of AF members vs. hoi polloi, omegas, etc. creates some subtle corruption/distortion field. My overall vague impression is that the LW team generally tends to like solutions which look theoretically nice, and tends not to see the subtler impacts on the elephants. Where my approach would be to try to move much of the elephants-playing-status-game out of the way, what’s attempted here sometimes feels a bit like herding elephants with small electric jolts.
I’m not sure I understand this part; can you try restating the concern in different words?
Sure.
1) From the LW user perspective, AF is integrated in a way which signals there are two classes of users, where the AF members are something like “the officially approved experts” (specialists, etc.), together with omega badges, special karma, an application process, etc. In such a setup it is hard for the status-tracking subsystem which humans generally have to not care about what is “high status”. At the same time, I went through the list of AF users, and it seems a much better representation of something which Rohin called “viewpoint X” than of the field of AI alignment in general. I would expect some subtle distortion as a result.
2) The LW team seem quite keen on e.g. karma, cash prizes on questions, omegas, daily karma updates, and similar technical measures which in S2-centric views bring clear benefits (sorting of comments, credible signalling of interest in questions, creating a high-context environment for experts, ...). Often these likely also have important effects on S1 motivations / social interactions / etc. I’ve discussed karma and omegas before; creating an environment driven by prizes risks eroding the spirit of cooperativeness and sharing of ideas, which is one of the virtues of the AI safety community; and so on. “Herding elephants with small electric jolts” is a poetic description of the effects people’s S1 gets from downvotes and strong downvotes.