Criticism that’s been covered before should be addressed by citing prior discussion and flagging the post as a duplicate, unless the critic can point out some way their phrasing is better. Language models are potentially very capable of making the process of citing dupes much more efficient; I’m going to talk to AI Objectives about this in the next week, and it’s one of the technologies we’re planning to discuss.
(Less relevant to the site, but general advice: in situations where a bad critic is exhibiting vices that make good-faith conversation impossible, it’s not ad hominem to redirect the conversation and focus on that. To have a viable discourse, it is necessary to hold bad critics accountable.)
What are the actual rationality concepts LWers are basically required to understand to participate in most discussions?
Being interested in finding out when you’re wrong about something, and promoting good critics, is one of the defining norms of rationality, but the site has a big problem here: the way a user or community expresses interest and promotes posts on a redditlike is by voting, and voting is anonymous, so there’s no way to punish violations of this norm.
I think this is actually a serious problem with the site, and the approach I’d take to fixing it is to make a different kind of site.
This is a cool suggested use of language models. I’ll think about whether/how to implement it on LW.
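(A minimal sketch of what automated dupe-citing could look like, purely illustrative: it assumes the sentence-transformers library, and the post texts and similarity threshold are made up. A real implementation would search the actual archive of prior discussion and probably have a language model vet the candidates before citing them.)

```python
# Hypothetical sketch: surface likely-duplicate criticism by semantic similarity.
# Assumes the sentence-transformers library; posts and threshold are invented examples.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

prior_posts = [
    "Voting is anonymous, so norm violations can't be traced to individual voters.",
    "Karma rewards agreement more than it rewards good criticism.",
]
new_criticism = "Because votes are anonymous, there's no way to hold voters to this norm."

# Embed the new post and the archive, then rank prior posts by cosine similarity.
prior_emb = model.encode(prior_posts, convert_to_tensor=True)
new_emb = model.encode(new_criticism, convert_to_tensor=True)
scores = util.cos_sim(new_emb, prior_emb)[0]

# Anything above the (arbitrary) threshold gets suggested as prior discussion to cite.
THRESHOLD = 0.6
for post, score in sorted(zip(prior_posts, scores.tolist()), key=lambda p: -p[1]):
    if score >= THRESHOLD:
        print(f"possible prior discussion ({score:.2f}): {post}")
```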