We don’t currently have karma and agree/disagree voting broken out separately on posts (like we do on comments). But I’m pretty sure that if we did, your posts would still be heavily downvoted. The problem is not that people disagree with you; “controversial views that many on LW don’t like” isn’t the central issue. The problem is that you’re presenting arguments which have been repeatedly floated for years and which most long-time LWers have long since dismissed as dumb. This is being correctly rate-limited, for exactly the same reasons that the site would downvote (and then rate-limit) e.g. someone who came along arguing that we can’t dismiss the existence of God because epistemic modesty, or someone who came along arguing that Searle’s Chinese room argument proves AI will never be an X-risk, or some similarly stupid (but not-completely-dead-outside-LW) argument.
Oh, come on, it’s clear that the Yudkowsky post was downvoted because it was bashing Yudkowsky and not because the arguments were dismissed as “dumb.”
It wouldn’t have mattered to me whose name was in the title of that post, the strong-downvote button floated nearer to me just from reading the rest of the title.
From reading omnizoid’s blog, he seems overconfident in all his opinions. Even when changing them, the new opinion is always a revelation of truth, relegating his previously confident opinion to the follies of youth, and the person who bumped him into the change is always brilliant.
“It wouldn’t have mattered to me whose name was in the title of that post, the strong-downvote button floated nearer to me just from reading the rest of the title.”
I think this is right from an individual user's perspective, but it misses part of the dynamic. My impression from reading LessWrong posts is that something like that post, had the topic been different (say, “Jesus was often incredibly wrong about stuff”), would have been largely ignored. It would maybe have ended up with between zero and a dozen karma and clearly not been clicked on by many people.
But that post, in some sense, was more successful than ones that are ignored—it managed to get people to read it (which is a necessary first step of communicating anything). That it has evidently failed in the second step (persuading people) is clear from the votes.
In a sense maybe this is the system working as intended: stuff that people just ignore doesn’t need downvoting because it doesn’t waste much communication bandwidth. Whereas stuff that catches attention and then disappoints is where the algorithm can maybe do people a favour with downvote data. But the way that system feeds into users’ posting rights seems a little weird.
There are plenty of people on LessWrong who are overconfident in all their opinions (or maybe write as if they are, as a misguided rhetorical choice?). It is probably a selection effect among people who appreciate the Sequences: whatever you think of his accuracy record, EY definitely writes as if he’s always very confident in his conclusions.
Whatever the reason, (rhetorical) overconfidence is most often seen here as a venial sin, as long as you bring decently reasoned arguments and are willing to change your mind in response to others’. Maybe that’s not your case, but I’m sure many would have been lighter with their downvotes had the topic been a different one: just a few people strong-downvoting instead of simply downvoting can change the karma balance quite a bit.
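Purely to illustrate how much that matters, here is a minimal arithmetic sketch with made-up vote weights. On LessWrong the actual weight of a normal or strong vote scales with the voter's karma, so the numbers below are hypothetical, not the site's real values; the point is only that the same handful of critics can flip a post's score by choosing strong rather than normal downvotes.

```python
# Hypothetical vote weights, for illustration only; real LessWrong weights
# depend on each voter's karma.
UPVOTE = 2            # assumed weight of an ordinary upvote from an established user
NORMAL_DOWNVOTE = -2  # assumed weight of an ordinary downvote
STRONG_DOWNVOTE = -7  # assumed weight of a strong downvote

upvoters, downvoters = 10, 4

score_if_normal = upvoters * UPVOTE + downvoters * NORMAL_DOWNVOTE  # 20 - 8  = +12
score_if_strong = upvoters * UPVOTE + downvoters * STRONG_DOWNVOTE  # 20 - 28 = -8

print(score_if_normal, score_if_strong)  # the same four critics flip the sign
```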
Yeah, I agree I have lots of views that LessWrongers find dumb. My claim is just that it’s bad when those views are hard to communicate on account of the way LW is set up.
I think it’s not just the views but also (mostly?) the way you write them.
This is hindsight, but next time, instead of writing “I think Eliezer is often wrong about X, Y, Z”, perhaps you should first write three independent articles: “my opinion on X”, “my opinion on Y”, “my opinion on Z”. Then one of two things will happen. If people agree with you on X, Y, Z, it makes sense to write the article “I think Eliezer is often wrong” and use those three articles as evidence. If people disagree with you on X, Y, Z, then it doesn’t really make sense to argue to that audience that Eliezer is wrong about those things, since they clearly think he actually is right about X, Y, Z. If you want to win this battle, you must first win the battles about X, Y, Z individually.
(In short, don’t argue two controversial things at the same time. Either make the article about X, Y, Z, or about Eliezer’s overconfidence and fallibility. An argument of the form “Eliezer is wrong because he says things you agree with” will not get a lot of support.)
Alternatively, it can happen that people will disagree with you on X and Y but agree on Z. In that case you can still make an argument for “Eliezer is sometimes wrong” and only use the discussion of Z as an example.
As shminux describes well, it’s possible to write about controversial views in a way that doesn’t get downvoted into nirvana. To do that, you actually have to think about how to write well.
The rate limit limits quantity, but that allows you to spend more time getting the quality right. If you keep writing in the style you are writing in, you aren’t communicating efficiently in the first place. Communicating efficiently would require thinking a lot more about what the cruxes actually are.
I don’t think people recognize when they’re in an echo chamber. You can imagine a Trump website downvoting all of the Biden followers and coming up with some ridiculous logic like, “And into the garden walks a fool.”
The current system was designed to silence the critics of Yudkowsky et al.’s worldview as it relates to the end of the world. Rather than fully censor critics (probably their actual goal), they have to at least feign objectivity and wait until someone walks into the echo-chamber garden, and then banish them as “fools”.
As someone with a significant understanding of ML, who previously disagreed with Yudkowsky but has recently come to partially agree with him on specific points (due to studying which formalisms apply to which empirical results, and when), and who may be contributing to the downvoting of people who have what I feel are bad takes, here are some thoughts on the pattern of when I downvote and when others downvote:
Yeah, my understanding of social network dynamics does imply that people often don’t notice echo chambers. Agreed.
The politics example is a great demonstration of this.
But I think in both the politics example and LessWrong’s case, the system doesn’t get explicitly designed for that end, in the sense of people turning it into a written, verbalized goal and then doing coherent reasoning to achieve it; instead, it’s an unexamined pressure. In fact, at the explicit-reasoning level, LessWrong is intended to be welcoming to people who strongly disagree and can be precise and step-by-step about why.
However, I do feel that there’s an unexamined pressure reducing the degree to which tutorial writing gets created and indexed to show new folks exactly how to communicate a claim in a way that LessWrong community voting standards find upvote-worthy despite being disagree-worthy. Because there is an explicit intention not to fall to this implicit pressure, I suspect we’re doing better here than many other places that have implicit pressure to bubble up, but of course having lots of people with similar opinions voting will create an implicit bubble pressure.
I don’t think the adversarial agency you’re imagining is quite how the failure works in full detail, but because it implicitly serves to produce a somewhat similar outcome, I can see how, in adversarial-politics mode, that wouldn’t seem to matter much. Compare peer review in science: it has extremely high standards, and it does push science somewhat towards an echo chamber, but because it is fairly precisely specified what it takes to get a claim everyone finds shocking through peer review (it takes a well-argued, precisely evidenced case), peer review is expected to serve as a filter that preserves scientific quality. (Though it is quite ambiguous whether that’s actually true, so you might be able to make the same arguments about peer review! Perhaps the only way science actually advances a shared understanding is enough time passing that people can build on what works, and the attempts that don’t work can be shown to be promising-looking but actually useless; in which case peer review isn’t actually helping at all. But I do personally think step-by-step validity of argumentation is in fact a big deal for determining, ahead of time, whether your claim will stand the test of time.)