I don’t think people recognize when they’re in an echo chamber. You can imagine a Trump website downvoting all of the Biden followers and coming up with some ridiculous logic like, “And into the garden walks a fool.”
The current system was designed to silence the critics of Yudkowsky et al.'s worldview as it relates to the end of the world. Rather than fully censor critics (probably their actual goal), they have to at least feign objectivity and wait until someone walks into the echo chamber garden, and then banish them as "fools".
As someone with a significant understanding of ML who previously disagreed with Yudkowsky but has recently come to partially agree with him on specific points (as a result of studying which formalisms apply to which empirical results, and when), and who may be contributing to the downvoting of people who have what I feel are bad takes, here are some thoughts on the pattern of when I downvote and when others downvote:
Yeah, my understanding of social network dynamics does imply that people often don't notice echo chambers. Agreed.

The politics example is a great demonstration of this.
But I think that in both the politics example and LessWrong's case, the system doesn't get explicitly designed for that end, in the sense of people stating it as a written, verbal goal and then reasoning coherently about how to achieve it; instead, it's an unexamined pressure. In fact, at the level of explicit reasoning, LessWrong is intended to be welcoming to people who strongly disagree and can be precise and step-by-step about why. However,
I do feel that there's an unexamined pressure reducing the degree to which tutorial writing gets created and indexed to show new folks exactly how to communicate a claim in a way the LessWrong community's voting standards find upvote-worthy despite being disagree-worthy. Because there is an explicit intention not to give in to this implicit pressure, I suspect we're doing better here than many other places that face the same implicit pressure to form a bubble, but of course having lots of people with similar opinions voting will create some implicit bubble pressure.
I don't think the adversarial agency you're imagining is quite how the failure works in full detail, but since it implicitly produces a somewhat similar outcome, I can see how, in adversarial-politics mode, that distinction wouldn't seem to matter much. Compare peer review in science: it has extremely high standards, and it does push science somewhat toward an echo chamber, but because what it takes to get a claim everyone finds shocking through peer review is fairly precisely specified (a well-argued, precisely evidenced case), peer review is expected to serve as a filter that preserves scientific quality. (Though it is quite ambiguous whether that's actually true, so you might be able to make the same arguments about peer review! Perhaps the only way science actually advances shared understanding is enough time passing that people can build on what works and the approaches that don't work can be shown to be promising-looking but actually useless; in that case peer review isn't helping at all. But I do personally think that step-by-step validity of argumentation is in fact a big deal for determining, ahead of time, whether a claim will stand the test of time.)