This means you’re using others’ reactions to define what you are or are not okay with.
I mean, if you think this −1 / −3 / −5 is reflecting something true, are you saying you would rather keep that truth hidden so you can keep feeling good about posting in ignorance?
And if you think it’s not reflecting something true, doesn’t your reaction highlight a place where your reactions need calibrating?
I’m pretty sure you’re actually talking about collective incentives and you’re just using yourself as an example to point out the incentive landscape.
But this is a place where a collective culture of emotional codependence actively screws with epistemics.
Which is to say, I disagree in a principled way with your sense of “wrongness” here, in the sense you name in your previous comment:
Like, the complaint here is not necessarily “y’all’re doing it Wrong” with a capital W so much as “y’all’re doing it in a way that seems wrong to me, given what I think ‘wrong’ is,” and there might just be genuine disagreement about wrongness.
I think a good truth-tracking culture acknowledges, but doesn’t try to ameliorate, the discomfort you’re naming in the comment I’m replying to.
(Whether LW agrees with me here is another matter entirely! This is just me.)
I mean, if you think this −1 / −3 / −5 is reflecting something true, are you saying you would rather keep that truth hidden so you can keep feeling good about posting in ignorance?
No, not quite.
There’s a difference (for instance) between knowledge and common knowledge, and there’s a difference (for instance) between animosity and punching.
Or maybe this is what you meant by “actually talking about collective incentives and you’re just using yourself as an example to point out the incentive landscape.”
A bunch of LWers can be individually and independently wrong about matters of fact, and this is different from them creating common knowledge that they all disagree with a thing (wrongly).
It’s better, in an important sense, for ten individually wrong people not to have common knowledge that the other nine are also wrong about this thing; otherwise they come together and form the anti-vax movement.
Similarly, a bunch of LWers can be individually in grumbly disagreement with me, and this is different from there being a flag for the grumbly discontent to come together and form SneerClub.
(It’s worth noting here that there is a mirror to all of this, i.e. there’s the world in which people are quietly right, or in which their quiet discontent is, like, a Correct Moral Objection or something. But it is an explicit part of my thesis here that I do not trust LWers en masse. I think the actual consensus of LWers is usually hideously misguided, and that a lot of LW’s structure (e.g. weighted voting) helps to correct and ameliorate this fact, though not perfectly (e.g. Ben Hoffman’s patently false slander of me sitting in positive vote territory for over a week with no one speaking in objection to it; that was Old LessWrong A Long Time Ago, but it nevertheless still looms large in my model, because I think New LessWrong Today is more like the post-Civil-War South (i.e. not all that changed) than like post-WWII Japan (i.e. deeply restructured)).)
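(For concreteness, here is a toy sketch of the kind of weighted-voting structure I mean, where a vote’s strength scales with the voter’s standing. This is not LW’s actual formula; the karma-to-weight mapping below is invented purely for illustration.)

```python
# Toy illustration of karma-weighted voting (not LessWrong's real formula;
# the karma-to-weight mapping here is invented for the example).
from math import log2

def vote_weight(voter_karma: int) -> int:
    """Hypothetical mapping: higher-karma voters get somewhat stronger votes."""
    return max(1, int(log2(max(voter_karma, 1))) // 2)

def score(votes: list[tuple[int, int]]) -> int:
    """Sum of direction (+1 or -1) times each voter's weight."""
    return sum(direction * vote_weight(karma) for direction, karma in votes)

# Five low-karma downvotes vs. two high-karma upvotes:
votes = [(-1, 10)] * 5 + [(+1, 5000)] * 2
print(score(votes))  # weighted total is positive; a raw count would give -3
```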
What I want is for Coalitions of Wrongness to have a harder time forming, and Coalitions of Rightness to have an easier time forming.
It is up in the air whether RightnessAndWrongnessAccordingToDuncan is closer to actually right than RightnessAndWrongnessAccordingToTheLWMob.
But it seems to me that the vote button, in its current implementation and evaluated according to the votes coming in, was more likely to land in the non-overlap between those two, on the LWMob side, which makes it an asymmetric weapon pointed in the wrong direction.
Sorry, this comment is sort of quickly tossed off; please let me know if it doesn’t make sense.
Mmm. It makes sense. It was a nuance I missed about your intent. Thank you.
What I want is for Coalitions of Wrongness to have a harder time forming, and Coalitions of Rightness to have an easier time forming.
Abstractly that seems maybe good.
My gut sense is you can’t do that by targeting how coalitions form. That engenders Goodhart drift. You’ve got to do it by making truth easier to notice in some asymmetric way.
I don’t know how to do that.
I agree that this voting system doesn’t address your concern.
It’s unclear to me how big a problem it is though. Maybe it’s huge. I don’t know.