Sure.
1) From the LW user perspective, AF is integrated in a way which signals that there are two classes of users, with AF members positioned as something like “the officially approved experts” (specialists, etc.), complete with omega badges, special karma, an application process, and so on. In such a setup it is hard for the status-tracking subsystem which humans generally have to avoid caring about what is “high status”. At the same time: I went through the list of AF users, and it seems a much better representation of what Rohin called “viewpoint X” than of the field of AI alignment in general. I would expect some subtle distortion as a result.
2) The LW team seems quite keen on e.g. karma, cash prizes on questions, omegas, daily karma updates, and similar technical measures which, on S2-centric views, bring clear benefits (sorting of comments, credible signalling of interest in questions, creating a high-context environment for experts, ...). But these measures likely also have important effects on S1 motivations, social interactions, etc. I’ve discussed karma and omegas before; creating an environment driven by prizes risks eroding the spirit of cooperativeness and sharing of ideas which is one of the virtues of the AI safety community; and so on. “Herding elephants with small electric jolts” is a poetic description of the effect downvotes and strong downvotes have on people’s S1.