Probability theory isn’t going to tell you how polite to be. It just isn’t. Why would it? How could it?
I agree with this particular statement, but there are two nearby statements that also seem true and important:
Probability theory absolutely informs what sorts of communication styles are going to identify useful truths most efficiently. For example, you should be more likely to make utterances like “this updates my probability that X will happen” rather than “X will happen” or “X will not happen” in a more boolean true/false paradigm. (A toy sketch of this follows the second statement below.)
Human psychology and cognitive science (as well as the general study of minds-in-general) absolutely inform the specific question of “what sort of politeness norms are useful for conversations optimized for truth-tracking”. There might be multiple types of conversations that optimize for different truth-tracking strategies. Debate vs. collaborative brainstorming vs. double crux might accomplish slightly different things and benefit from different norms. Crocker’s rules might create locally more truth-tracking in some situations but also make an environment less likely to include people subconsciously maneuvering such that they won’t have to deal with painful stimuli. There is some fact-of-the-matter about what sort of human cultures find out the most interesting and important things most quickly.
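To make the first statement concrete, here’s a minimal sketch in the abstracted-Python idiom this thread gestures at elsewhere. The function and all numbers are mine, invented purely for illustration:

```python
# A toy sketch (mine, not from the thread) of why a probabilistic update
# report carries information that a boolean assertion throws away.
# All numbers here are hypothetical, chosen purely for illustration.

def bayes_update(prior: float, p_evidence_if_x: float, p_evidence_if_not_x: float) -> float:
    """Posterior P(X | evidence) via Bayes' rule."""
    numerator = p_evidence_if_x * prior
    denominator = numerator + p_evidence_if_not_x * (1.0 - prior)
    return numerator / denominator

prior = 0.30  # "X is somewhat unlikely"
# Observe evidence that is twice as likely if X is true as if it is false:
posterior = bayes_update(prior, p_evidence_if_x=0.6, p_evidence_if_not_x=0.3)
print(f"P(X): {prior:.0%} -> {posterior:.0%}")  # P(X): 30% -> 46%

# A speaker limited to "X will happen" / "X will not happen" says
# "X will not happen" both before and after seeing the evidence, losing
# the update; "this updates my probability that X will happen" transmits it.
```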
I argued a bunch with your post at the time and generally don’t think it was engaging with the question I’m considering here. I complained at the time about you substituting a word definition without acknowledging it, which I think you’re doing again here.
I complained at the time about you substituting a word definition without acknowledging it, which I think you’re doing again here.
Bloom specifically used the phrase “Platonic ideal Art of Discourse”! When someone talks about Platonic ideals of discourse, I think it’s a pretty reasonable reading on my part to infer that they’re talking about simple principles of ideal reasoning with wide interpersonal appeal, like the laws of probability theory, or “Clarifying questions aren’t attacks”, or “A debate in which one side gets unlimited time, but the other side is only allowed to speak three times, isn’t fair”, or “When you identify a concern as your ‘actual crux’, and someone very clearly addresses it, you should either change your mind or admit that it wasn’t a crux”—not there merely being a fact of the matter as to what option is best with respect to some in-principle-specifiable utility function when faced with a complicated, messy empirical policy trade-off (as when deciding when and how to use multiple types of conversations that optimize for different truth-tracking strategies), which is trivial.
If Bloom meant something else by “Platonic ideal Art of Discourse”, he’s welcome to clarify. I charitably assumed the intended meaning was something more substantive than, “The mods are trying to do what we personally think is good, and whatever we personally think is good is a Platonic ideal”, which is vacuous.

This is germane because when I look at recent moderator actions, the claim that the mod team is trying to be accountable to simple principles of ideal reasoning with wide interpersonal appeal is farcical. You specifically listed limiting Said Achmiz’s speech as a prerequisite “next step” for courting potential users to the site. When asked whether the handful of user complaints against Achmiz were valid, you replied that you had “a different ontology here”.
That is, from your own statements, it sure looks like your rationale for restricting Achmiz’s speech is not about him violating any principles of ideal discourse that you can clearly describe (and could therefore make neutrally-enforced rules about, and have an ontology such that some complaints are invalid) but rather that some people happen to dislike Achmiz’s writing style, and you’re worried about those people not using your website. (I’m not confident you’ll agree with that characterization, but it seems accurate to me; if you think I’m misreading the situation, you’re welcome to explain why.)
As amusing as it would be to see you try, I should hope you’re not going to seriously defend “But then fewer people would use our website” as a Platonic ideal of good discourse?!

(I would have hoped that I wouldn’t need to explain this, but to be clear, the problem with “But then fewer people would use our website” as moderation policy is that it systematically sides with popularity over correctness—deciding arguments based on the relative social power of their proponents and detractors, rather than the intellectual merits.)
I’ve heard that some people on this website don’t like Holocaust allusions, but frankly, you’re acting like the property owner of a gated community trying to court wealthy but anti-Semitic potential tenants by imposing restrictions on existing Jewish tenants. You’re sensitive to the fact that this plan has costs, and you’re willing to consider mitigating those costs by probably building something that lets people Opt Into More Jews, but you’re not willing to consider that the complaints of the rich people you’re trying to attract are invalid on account of the Jewish tenants not doing anything legibly bad (that you could make a neutrally-enforced rule against), because you have a different ontology here.
If you object to this analogy, I think you should be able to explain what, specifically, you think the relevant differences are between people who don’t want to share a gated compound with Jews (despite the fact that they’re free to not invite Jews to dinner parties at their own condos), and people who don’t want to share a website with Said Achmiz (despite the fact that they’re free to ban Achmiz from commenting on their own posts). I think it’s a great analogy—right down to the detail of Jews being famous for asking annoying questions.
Human psychology and cognitive science
You mean all that stuff that famously fails to replicate on a regular basis, huge swaths of which have turned out to be basically nonsense…?
the general study of minds-in-general
I don’t think I know what this is. Are you talking about animal psychology, or formal logic (and similarly mathematical fields like probability theory), or what…?
There is some fact-of-the-matter about what sort of human cultures find out the most interesting and important things most quickly.
No doubt there is, but I would like to see something more than just a casual assumption that we have any useful amount of “scientific” or otherwise rigorous knowledge (as opposed to, e.g., “narrative” knowledge, or knowledge that consists of heuristics derived from experience) about this.
I don’t think I know what this is. Are you talking about animal psychology, or formal logic (and similarly mathematical fields like probability theory), or what…?
Some examples I have in mind here are game theory, information theory, and algorithm design. I think the thing on my mind when I wrote the sentence was How An Algorithm Feels From Inside, which touches on different ways you might structure a network that would have different implications for the algorithm’s efficiency and what errors it might make as a side effect.
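For instance, here’s a rough sketch of the two network designs that post contrasts. This is my own gloss—the post contains no code, and the attribute names are paraphrased:

```python
# A rough sketch (my own gloss; the post contains no code, and the
# attribute names are paraphrased) of the two network designs contrasted
# in "How An Algorithm Feels From Inside".

OBSERVABLES = ["blue", "egg_shaped", "furred", "glows", "contains_vanadium"]

def direct_edge_count(n: int) -> int:
    # Network 1: every observable is wired directly to every other,
    # so inference is exact but the wiring cost grows quadratically.
    return n * (n - 1) // 2

def central_node_edge_count(n: int) -> int:
    # Network 2: each observable is wired only to a central "category"
    # node ("blegg?"), so the wiring cost grows linearly.
    return n

n = len(OBSERVABLES)
print(direct_edge_count(n))        # 10 edges
print(central_node_edge_count(n))  # 5 edges: the central node is cheaper

# The side-effect error: Network 2's central node keeps its own activation
# even after all five observables are known, so the algorithm still feels
# a leftover question ("but is it *really* a blegg?") that no further
# observation could answer.
```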
To be clear, I don’t currently think I have beliefs about moderation that are strongly downstream of those fields. It’s more that, on a forum that is in large part about the intersection of these (and similar) fields, I think it’s nice to step between my practical best guesses about what tools to apply and what underlying laws might govern things, even if the laws I know of don’t directly apply to the situation.
Game theory is the bit that I feel like I’ve looked into the most myself and grokked, with Most Prisoner’s Dilemmas are Stag Hunts; Most Stag Hunts are Schelling Problems being an example I found particularly crisp and illuminating, and Elinor Ostrom’s Governing the Commons being useful for digging into the details of messy human examples, and giving me a sense of what it’d mean to actually translate them into a formalization.
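As a toy illustration of the kind of formalization I mean (my own sketch—neither source presents it this way), the difference between the two games comes down to a reordering of payoffs:

```python
# A toy sketch (mine; neither source presents it this way) of how a
# symmetric 2x2 game's character is determined by its payoff ordering,
# using the textbook labels T(emptation), R(eward), P(unishment), S(ucker).

def classify_game(T: float, R: float, P: float, S: float) -> str:
    if T > R > P > S:
        return "Prisoner's Dilemma: defection dominates"
    if R > T and P > S:
        return "Stag Hunt: two equilibria; cooperating requires assurance"
    return "something else"

# Textbook one-shot Prisoner's Dilemma:
print(classify_game(T=5, R=3, P=1, S=0))

# If the interaction iterates, so that sustained mutual cooperation pays
# more than a one-time betrayal, the effective payoffs reorder and the
# same situation becomes a Stag Hunt (the post's titular claim):
print(classify_game(T=5, R=6, P=1, S=0))
```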
FYI, I’d include a lot of Zack’s philosophy-of-language-translated-into-abstracted-python as useful here, and I’d also include your Selective, Corrective, Structural: Three Ways of Making Social Systems Work as an example of something I’d expect to still hold up in some alien civilizations.