My sense is that I am disagreeing with (a set of) specific things.
The bulk update that I’m pushing for is not “switch my opinion to everything Duncan says,” but “start looking for ways to make the smaller, each-nameable-in-its-own-right slips in rationality happen less often.”
I don’t think I’m making a meta-argument about disagreement being wrong, except insofar as I’m asserting a belief that LessWrong ought to be for a specific thing, and that, in the case where there is consensus about that thing, other things should be deprioritized. I’m not even claiming that I’m definitely right about the thing LW ought to be for! But if it’s about that thing, or chooses to become so, then it needs to be less about the other thing.
If we had a consensus about “this comment is more rational, and that comment is less rational”, then reminding people to upvote the rational comments and downvote the irrational comments might result in karma scores that everyone would agree with.
(Modulo the fact, already mentioned somewhere in this discussion, that some comments are seen by more people than others, so a more-visible comment would still collect more karma for the same degree of rationality.)
(Plus some other issues, such as: what if someone writes a comment containing one rational and one irrational paragraph; should we penalize needlessly long or hard-to-read comments; what if the comment is not quite good but contains a rare and important idea; etc.)
Thing is, I don’t believe we have this consensus. Some comments are obviously rational, some are obviously irrational, but there are many on which different people have different opinions.
Technically, this can be measured. Find a person who comments and votes on LW and whose rationality you consider fully satisfactory. Then find a long thread where you both voted, and check how many comments you upvoted/downvoted/ignored the same way, and how many times you disagreed (not just upvote vs. downvote, but also e.g. upvote vs. no vote). My guess is that you overestimate how often your votes would match.
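To make that procedure concrete, here is a minimal sketch of the comparison in Python. The vote data is hypothetical, and Cohen's kappa (a standard chance-corrected agreement statistic, not something proposed above) is included only as an optional refinement:

```python
# A rough sketch of the vote-comparison check described above (hypothetical data).
# Each dict maps a comment id to one voter's action: "up", "down", or "none".
from collections import Counter

my_votes = {"c1": "up", "c2": "none", "c3": "down", "c4": "up", "c5": "none"}
their_votes = {"c1": "up", "c2": "down", "c3": "down", "c4": "none", "c5": "none"}

comments = sorted(set(my_votes) | set(their_votes))
pairs = [(my_votes.get(c, "none"), their_votes.get(c, "none")) for c in comments]

# Raw agreement: the fraction of comments where both voters did the same thing,
# counting "no vote" as its own category, as the comment suggests.
agreement = sum(a == b for a, b in pairs) / len(pairs)

# Optional refinement: Cohen's kappa corrects for agreement expected by chance.
mine, theirs = Counter(a for a, _ in pairs), Counter(b for _, b in pairs)
chance = sum(mine[k] * theirs[k] for k in ("up", "down", "none")) / len(pairs) ** 2
kappa = (agreement - chance) / (1 - chance) if chance < 1 else 1.0

print(f"raw agreement: {agreement:.2f}, kappa: {kappa:.2f}")
```

On the toy data above, the two voters agree on 3 of 5 comments, which is less impressive once chance agreement is subtracted out; running the same check on a real thread is what would test the prediction.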
My understanding of your complaint is that people often vote on comments regardless of their rationality. Which certainly happens. But in a parallel reality where all of us consistently tried our best to vote only for good arguments… I think you would expect much greater consensus in the votes than I would.
Rationality doesn’t make sense as a property of comments. It’s a quality of cognitive skills that work well (and might generate comments). Any judgement of comments according to the rationality of the algorithms that generated them is an ad hominem equivocation; the comments screen off the algorithms that generated them.
Mmm, I think this is a mistake.
I think that you’re correct to point at a potential trap that people might slip into, of confusing the qualities of a comment with the properties of the algorithm that generated it. I think this is a thing people do, in fact, do, and it’s a projection, and it’s an often-wrong projection.
But I also think that there’s a straightforward thing that people mean by “this comment is more rational than that one,” and I think it’s a valid use of the word “rational” in the sense that 70+ out of 100 people would interpret it as meaning what the speaker actually intended.
Something like:
This is more careful with its inferences than that
This is more justified in its conclusions than that
This is more self-aware about the ways in which it might be skewed or off than that
This is more transparent and legible than that
This causes me to have an easier time thinking and seeing clearly than that
… and I think “thinking about how to reliably distinguish between [this] and [that] is a worthwhile activity, and a line of inquiry that’s likely to lead to promising ideas for improving the site and the community.”
I’m specifically boosting the prescriptivist point about not using the word “rational” in an inflationary way that doesn’t make literal sense. Comments can be valid, explicit about their own epistemic status, true, relevant to their intended context, free of well-known mistakes, and so on and so forth, but, for the reason I gave, they can’t be rational in the sense of “rational” as a property of cognitive algorithms.
Incidentally, I like the distinction between “error” and “mistake” from linguistics, where an error is systematic or deliberatively endorsed behavior, while a mistake is intermittent behavior that isn’t deliberatively endorsed. On that distinction, my comment would be making an error, not a mistake.
I like it.
I agree that the consensus doesn’t exist.
In part, that’s why several of my suggestions depended on a small number of relatively concrete observables (like distinguishing inference from observation).
But also, I think that a substantial driver of the lack of consensus/spread of opinion lies in the fact that the population of LessWrong today, in my best estimation, contains a lot of people who “ought not to be here,” not in the sense that they’re bad or wrong or anything, but in the sense that a gym ought mostly only contain people interested in doing physical activity and a library ought mostly only contain people interested in looking at books. There is some number of non-central or non-bought-in members that a given population can sustain, and right now I think LessWrong is holding more than it can handle.
I think a tighter population would still lack consensus in the way you highlight, but less so.
FWIW, I’m someone who believes I make the occasional useful contribution on LW, but I also have an intuitive sense of being “dangerously non-central” here, with the first word of that expanding to something like “likely to be welcomed anyway, but in a way that would do more collateral damage to community alignment (via dilution) than is broadly recognized in a way people are willing to act on”. I apply a significant amount of secondary self-restraint to what I post on those grounds, possibly not enough (though my thoughts about what an actually appropriate strategy here would be are too muddled to say that with confidence), and my emotional sense endorses this restraint (in particular, it doesn’t produce noticeable feelings of hostility or rejection in either direction).
I’m saying this out loud partly in case anyone else who’s had similar first-person experiences would otherwise feel awkward about describing them here, which would leave a cluster of evidence missing; I don’t know how large that group would be.