To be clear, given the vote system we went with (which basically rolled all considerations into a single vote and a single ask), I don’t think there was anything wrong with voting against Affordance Widths for that reason.
I saw this more as “the system wasn’t well designed, we should use a better system next time.”
(Different LW team members also had different opinions on what exactly the Review should be doing and why, and some changed their mind over the course of the process, which is part of why some of the messaging was mixed).
The reason I thought (at the time) it was best to “just collapse everything into one vote, tied fairly closely to ‘what should be in the book?’” was that if you told people it was about “being honest about good epistemics,” but the result still ended up influencing the book, you’d have something of an asshole filter, where people who voted strategically would be disproportionately rewarded.
I think I may have some conceptual disagreements with your framing, but my current goal for next year is to structure things in a way that separates out truth, usefulness, and broader reputational effects from each other, so that the process is more robust to people coming at it with different goals and frames.
The reason I’m more worried about this for an Alignment Review is that the stakes are higher, and it is important not only that the process be epistemically sound, but that everyone believe it’s epistemically sound and/or fair. (And meanwhile, sexual abuse isn’t the only worrisome thing that could come up.)