If I naively imagine using something close to the 2019 review for alignment (even within a single paradigm), I expect my concerns about “sort by prestige” to be much worse, because there are greater political consequences one could screw up (and the lack of common knowledge about how large and how bad those consequences might be could make everyone too anxious to get buy-in).
I don’t think so.
Your main example for the prestige problem with the LW review was “affordance widths”. I admit that I was one of the people who assigned a lot of negative points to “affordance widths”, and that I did so not purely on abstract epistemic grounds (in those terms the essay is merely mediocre) but because of the added context about the author. When I voted, the question I was answering was “should this be included in Best of 2018”, taking all considerations into account. If I wasn’t supposed to do that, then I’m sorry; I hadn’t realized it before.
The main reason I think it would be terrible to include “affordance widths” is not exactly prestige. The argument I used before is prestige-based, but that’s because I expected that part to be more broadly accepted, and I wished to avoid the more charged debate I anticipated if I ventured closer to the core. The main reason is that I think it would send a really bad message to women and other vulnerable populations who are interested in LessWrong: not because of the identity of the author, but because the essay was obviously designed to justify the author’s behavior. Some of the reputational ramifications of that would be well earned (although I also expect the response would be disproportionate).
On the other hand, it is hard for me to imagine anything of the sort applying to the Alignment Forum. It would be much trickier to somehow justify sexual abuse through a discussion of AI risk, and if someone accomplished it, then surely the AI-alignment-qua-AI-alignment value of that work would be very low. The sort of political considerations that do apply here are not considerations that would affect my vote, and I suspect (although of course I cannot be sure) the same is true of most other voters.
Also, next time I will adjust my behavior in the LW vote as well, since that approach is clearly against the intent of the organizers. However, I suggest creating some process in parallel to the main vote where context-dependent considerations can be brought up, either for public discussion or for the attention of the moderator team specifically.
To be clear, given the vote system we went with (which basically rolled all considerations into a single vote and a single ask), I don’t think there was anything wrong with voting against Affordance Widths for that reason.
I saw this more as “the system wasn’t well designed, we should use a better system next time.”
(Different LW team members also had different opinions on what exactly the Review should be doing and why, and some changed their mind over the course of the process, which is part of why some of the messaging was mixed).
The reason I thought (at the time) it was best to “just collapse everything into one vote, which is tied fairly closely to ‘what should be in the book?’” was that if you told people it was about “being honest about good epistemics”, but the result still ended up influencing the book, you’d have something of an asshole filter where some people vote strategically and are disproportionately rewarded.
I think I may have some conceptual disagreements with your framing, but my current goal for next year is to structure things in a way that separates out truth, usefulness, and broader reputational effects from each other, so that the process is more robust to people coming at it with different goals and frames.
The reason I’m more worried about this for an Alignment Review is that the stakes are higher, and it is important not only that the process be epistemically sound, but also that everyone believe it is epistemically sound and/or fair. (And meanwhile, sexual abuse isn’t the only possible worrisome thing that could come up.)