I agree with the focus on epistemic standards, and I think many of the points here are good. I disagree that this is the primary reason to focus on maintaining epistemic standards:
Posts like this can hurt the optics of the research done in the LW/AF extended universe. What does a prospective AI x-safety researcher think when they get referred to this site and see this post above several alignment research posts?
I think we want to focus on the epistemic standards of posts so that we ourselves can trust the content on LessWrong to be honestly informing us about the world. In most places you have to watch your back way more than on LessWrong (e.g. Twitter, Reddit, Facebook). I don’t currently value the question “what does this look like to other people” half as much as I care about the question “can I myself trust the content on LessWrong”.
(Though, I admit, visibly having strong truth-seeking norms is a good way to select for the sorts of folks who will supply truth and not falsehood.)
I somewhat agree, although I obviously put a bit less weight on your reason than you do. Maybe I should update my confidence in the importance of what I wrote to medium-high.
Let me raise the question of continuously rethinking incentives on LW/AF, for both Ben’s reason and my original reason.
The upvote/karma system does not seem to incentivize high epistemic standards and highly rigorous posts, although I would need more data points to make a proper judgement.
Rigor in the sense of meticulously researching everything doesn't seem like the best thing to strive for? For what it’s worth, I think the author actually did a good job framing the post, so I mostly read it as “this is what this feels like” rather than “this is what the current funding situation ~actually~ is”. The karma system in the comments did a great job of surfacing important facts like the hotel price.