I’m not sure what your hobby horse is, but I do take objection to the assumption in this post that decoupling norms are the obvious and only correct way to deal with things. The problem with this is that if you actually care about the world, you can’t take arguments in isolation, but have to consider the context in which they are made.
1. It can be perfectly OK for the environment if someone brings up a topic once, but it can make people less likely to want to visit the forum if someone brings it up all the time and tries to twist other people’s posts towards a discussion of their thing. It would be perfectly alright for moderators who didn’t want to drive away their visitors to ask this person to stop.
2. It can be perfectly OK to kick out someone who has a bad reputation that makes important posters unable to post on your website because they don’t want to associate with that person, even IF that person has good behavior.
3. It can be perfectly OK to downvote posts that are well-reasoned, on topic, and not misleading, because you’re worried about the incentives of those posts being highly upvoted.
All of these things are tradeoffs with decoupled conversation, obviously, which has its own benefits. The website has to decide what values it stands for and will fight for, vs. what it will be flexible on depending on context. What I don’t think is OK is just to ignore context and assume that decoupling is always unambiguously the right call.
I do take objection to the assumption in this post that decoupling norms are the obvious and only correct way to deal with things.
Zack didn’t say this. What he said was:
Like, maybe statistics is part of the common interest of many causes, such that, as a matter of local validity, you should assess arguments about statistics on their own merits in the context that those arguments are presented, without worrying about how those arguments might or might not be applied in other contexts?
Which is compatible with thinking more details should be taken into account when the statistical arguments are applied in other contexts (in fact, I’m pretty sure this is what Zack thinks).
Discussion of abstract epistemology principles, which generalize across different contexts, is perhaps most of the point of this website...
Your points 1,2,3 have nothing to do with the epistemic problem of decoupling vs contextualizing, they have to do with political tradeoffs in moderating a forum; they apply to people doing contextualization in their analysis, too. I hate that the phrase “contextualizing norms” is being used to conflate between “all sufficiently relevant information should be used” and “everything should be about politics”.
Your points 1,2,3 have nothing to do with the epistemic problem of decoupling vs contextualizing,
This is probably because I don’t know what the epistemic problem is. I only know about the linked post, which defines things like this:
Decoupling norms: It is considered eminently reasonable to require your claims to be considered in isolation—free of any context or potential implications. Attempts to raise these issues are often seen as sloppy thinking or attempts to deflect.
Contextualising norms: It is considered eminently reasonable to expect certain contextual factors or implications to be addressed. Not addressing these factors is often seen as sloppy or even an intentional evasion.
… To a contextualiser, decouplers’ ability to fence off any threatening implications looks like a lack of empathy for those threatened, while to a decoupler the contextualiser’s insistence that this isn’t possible looks like naked bias and an inability to think straight.
I sometimes round this off in my head to something like “pure decouplers think arguments should be considered only on their epistemic merits, and pure contextualizers think arguments should be considered only on their instrumental merits”.
There might be another use of decoupling and contextualizing that applies to an epistemic problem, but if so it’s not defined in the canonical article on the site.
My basic read of Zack’s entire post was him saying over and over “Well there might be really bad instrumental effects of these arguments, but you have to ignore that if their epistemics are good.” And my immediate reaction to that was “No I don’t, and that’s a bad norm.”
I sometimes round this off in my head to something like “pure decouplers think arguments should be considered only on their epistemic merits, and pure contextualizers think arguments should be considered only on their instrumental merits”.
The proper words for that aren’t decoupling vs contextualizing; they’re denotative vs enactive language. That’s an axis orthogonal to how many relevant contextual factors are supposed to be taken into account: you can require lots of contextual factors to be taken into account in epistemic analysis, or require certain enactments to be made independent of context.
Note, the original post makes the conflation I’m complaining about here too!
It might just make more sense to give this one up to word inflation and come up with new words. I’ll happily use the denotative vs. enactive language to point to this thing in the future, but I’ll probably have to put a footnote that says something like (what most people in the community refer to as decoupling vs. contextualizing).
My basic read of Zack’s entire post was him saying over and over “Well there might be really bad instrumental effects of these arguments, but you have to ignore that if their epistemics are good.” And my immediate reaction to that was “No I don’t, and that’s a bad norm.”
It really looks like you’re defending the “appeal to consequences” as a reasonable way to think, and a respectable approach to public epistemology. But that seems so plainly absurd that I have to assume that I’ve misunderstood. What am I missing?
It really looks like you’re defending the “appeal to consequences” as a reasonable way to think, and a respectable approach to public epistemology. But that seems so plainly absurd that I have to assume that I’ve misunderstood. What am I missing?
It might be that we just have different definitions of absurd and you’re not missing anything, or it could be that you’re taking an extreme version of what I’m saying.
To wit, my stance is that ignoring the consequences of what you say is just obviously wrong. Even if you hold truth as a very high value, you would have to value it insanely more than any other value to never encounter a situation where you’re compromising other things you value by ignoring the difference you could make by not saying something, lying, being careful about how to phrase things, etc.
Now obviously, you also have to consider the effect this type of thinking/communication has on discourse and the public ability to seek the truth—and once you’ve done that you’re ALREADY thinking about the consequences of what you say and what you allow others to say, and the task at that point is to simply weigh them against each other.
It’s important to distinguish the question of whether, in your own personal decisionmaking, you should ever do things that aren’t maximally epistemically good (obviously, yes); from the question of whether the discourse norms of this website should tolerate appeals to consequences (obviously, no).
It might be morally right, in some circumstances, to pass off a false mathematical proof as a true one (e.g. in a situation where it is useful to obscure some mathematical facts related to engineering weapons of mass destruction). It’s still a violation of the norms of mathematics, with good reason. And it would be very wrong to argue that the norms of mathematics should change to accommodate people making this (by assumption, morally right) choice.
To summarize: you’re destroying the substrate. Stop it.
It’s important to distinguish the question of whether, in your own personal decisionmaking, you should ever do things that aren’t maximally epistemically good (obviously, yes); from the question of whether the discourse norms of this website should tolerate appeals to consequences (obviously, no).
I agree it’s important to realize that these things are fundamentally different.
It might be morally right, in some circumstances, to pass off a false mathematical proof as a true one (e.g. in a situation where it is useful to obscure some mathematical facts related to engineering weapons of mass destruction). It’s still a violation of the norms of mathematics, with good reason. And it would be very wrong to argue that the norms of mathematics should change to accommodate people making this (by assumption, morally right) choice.
A better norm for mathematics might be to NOT publish proofs that have obvious negative consequences, like enabling weapons of mass destruction, and to actively disincentivize people who publish that sort of research.
In other words, a norm might be to basically be epistemically pure, UNLESS the local instrumental considerations outweigh the cost to the epistemic climate. This can be rounded down to “have norms about epistemics and break them sometimes,” but only if, when someone points at edge cases where the norms are actively harmful, it’s made clear that sometimes breaking those norms is perfectly OK.
I.e., if someone is using the norms of the community as a weapon, it’s important to point out that the norms are a means to an end, and that the community won’t blindly allow itself to be taken advantage of.
I think my actual concern with this line of argumentation is: if you have a norm of “If ‘X’ and ‘X implies Y’ then ‘Y’, EXCEPT when it’s net bad to have concluded ‘Y’”, then the werewolves win.
The question of whether it’s net bad to have concluded ‘Y’, is much, much more complicated than the question of whether, logically, ‘Y’ is true under these assumptions (of course, it is). There are many, many more opportunities for werewolves to gum up the works of this process, making the calculation come out wrong.
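A rough formalization of the two candidate norms (the Bad(·) predicate here is just an illustrative stand-in for “net bad to have concluded”, not notation used anywhere in this thread):

$$\frac{X \qquad X \to Y}{Y} \qquad \text{vs.} \qquad \frac{X \qquad X \to Y \qquad \neg\,\text{Bad}(Y)}{Y}$$

The rule on the left can be checked locally; the extra premise on the right opens an open-ended side-argument about consequences, which is exactly where the works can get gummed up.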
If we’re having a discussion about X and Y, someone moves to propose ‘Y’ (because, as it has already been agreed, ‘X’ and ‘X implies Y’), and then someone else says “no, we can’t do that, that has negative consequences!”, that second person is probably playing a werewolf strategy, gumming up the works of the epistemic substrate.
If we are going to have the exception to the norm at all, then there has to be a pretty high standard of evidence to prove that adding ‘Y’ to the discourse, in fact, has bad consequences. And, to get the right answer, that discussion itself is going to have to be up to high epistemic standards. To be trustworthy, it’s going to have to make logical inferences much more complex than “if ‘X’ and ‘X implies Y’, then ‘Y’”. What if someone objects to those logical inference steps, on the basis that they would have negative consequences? Where does that discussion happen?
In practice, these questions aren’t actually answered. In practice, what happens is that social epistemology doesn’t happen, and instead everything becomes about coalitional politics. Saying ‘Y’ doesn’t mean ‘Y is literally true’, it means you’re part of the coalition of people who wants consequences related to (but not even necessarily directly implied by!) the statement ‘Y’ to be put into effect, and that makes you blameworthy if those consequences hurt someone sympathetic, or that coalition is bad. Under such conditions, it is a major challenge to re-establish epistemic discourse, because everything is about violence, including attempts to talk about the “we don’t have epistemology and everything is about violence” problem.
We have something approaching epistemic discourse here on LessWrong, but we have to defend it, or it, too, becomes all about coalitional politics.
If we are going to have the exception to the norm at all, then there has to be a pretty high standard of evidence to prove that adding ‘Y’ to the discourse, in fact, has bad consequences.
I want to note that LW definitely has exceptions to this norm, if only because of the boring, normal exceptions. (If we would get in trouble with law enforcement for hosting something you might put on LW, don’t put it on LW.) We’ve had in the works (for quite some time) a post explaining our position on less boring cases more clearly, but it runs into difficulty with the sort of issues that you discuss here; generally these questions are answered in private in a way that connects to the judgment calls being made and the particulars of the case, as opposed to through transparent principles that can be clearly understood and predicted in advance (in part because, to extend the analogy, this empowers the werewolves as well).
Another common werewolf move is to take advantage of strong norms like epistemic honesty, and use them to drive wedges in a community or push their agenda, while knowing they can’t be called out because doing so would be akin to attacking the community’s norms.
I’ve seen the meme elsewhere in the rationality community that strong and rigid epistemic norms are a good sociopath repellent, and it’s ALMOST right. The truth is that competent sociopaths (in the Venkat Rao sense) are actually great at using rigid norms for their own ends, and are great at using the truth for their own ends as well. The reason it might work well in the rationality community (besides the obvious fact that sociopaths are even better at using lies to their own ends than the truth) is that strong epistemics are very close to what we’re actually fighting for—and remembering and always orienting towards the mission is ACTUALLY an effective first line defense against sociopaths (necessary but not sufficient IMO).
99 times out of 100, the correct way to remember what we’re fighting for is to push for stronger epistemics above other considerations. I knew that when I made the original post, and I made it knowing I would get pushback for attacking a core value of the community.
However, 1 time out of 100 the correct way to remember what you’re fighting for is to realize that you have to sacrifice a sacred value for the greater good. And when you see someone explicitly pushing the gray area by trying to get you to accept harmful situations by appealing to that sacred value, it’s important to make clear (mostly to other people in the community) that sacrificing that value is an option.
What specifically do you mean by “werewolf” here & how do you think it relates to the way Jessica was using it? I’m worried that we’re getting close to just redefining it as a generic term for “enemies of the community.”
By werewolf I meant something like “someone who is pretending to be working for the community as a member, but is actually working for their own selfish ends”. I thought Jessica was using it in the same way.
That’s not what I meant. I meant specifically someone who is trying to prevent common knowledge from being created (and more generally, to gum up the works of “social decisionmaking based on correct information”), as in the Werewolf party game.
Worth noting: “werewolf” as a jargon term strikes me as something that is inevitably going to get collapsed into “generic bad actor” over time, if it gets used a lot. I’m assuming that you’re thinking of it sort of as in the “preformal” stage, where it doesn’t make sense to over-optimize the terminology. But if you’re going to keep using it I think it’d make sense to come up with a term that’s somewhat more robust against getting interpreted that way.
(random default suggestion: “obfuscator”. Other options I came up with required multiple words to get the point across and ended up too convoluted. There might be a fun shorthand for a type of animal or mythological figure that a) is a predator or parasite, and b) relies on making things cloudy. So far I could only come up with “squid”, due to ink jets, but it didn’t really have the right connotations)
That is a bit more specific than what I meant. In this case, though, the second, broader meaning of “someone who’s trying to gum up the works of social decisionmaking” still works in the context of the comment.
And when you see someone explicitly pushing the gray area by trying to get you to accept harmful situations by appealing to that sacred value
Um, in context, this sounds to me like you’re arguing that by writing “Where to Draw the Boundaries?” and my secret (“secret”) blog, I’m trying to get people to accept harmful situations? Am I interpreting you correctly? If so, can you explain in detail what specific harm you think is being done?
Sorry, I was trying to be really careful, as I was writing, not to accuse you specifically of bad intentions, but obviously it’s hard in a conversation like this where you’re jumping between the meta level and the object level.
It’s important to distinguish a couple things.
1. Jessica and I were talking about people with negative intentions in the last two posts. I’m not claiming that you’re one of those people that is deliberately using this type of argument to cause harm.
2. I’m not claiming that it was the writing of those two posts that was harmful in the way we were talking about. I was claiming that the long post you wrote at the top of the thread, where you made several analogies about your response, was exactly the sort of gray-area situation where, depending on context, the community might decide to sacrifice its sacred value. At the same time, you were banking on the fact that it was a sacred value to say “even in this case, we would uphold the sacred value.” This has the same structure as the werewolf move mentioned above, and it was important for me to speak up, even if you’re not a werewolf.

Thanks for clarifying!
people with negative intentions [...] deliberately
So, it’s actually not clear to me that deliberate negative intentions are particularly important, here or elsewhere? Almost no one thinks of themselves as deliberately causing avoidable harm, and yet avoidable harm gets done, probably by people following incentive gradients that predictably lead towards harm, against truth, &c. all while maintaining a perfectly sincere subjective conscious narrative about how they’re doing God’s work, on the right side of history, toiling for the greater good, doing what needs to be done, maximizing global utility, acting in accordance with the moral law, practicing a virtue which is nameless, &c.
it was important for me to speak up, even if you’re not a werewolf.
Agreed. If I’m causing harm, and you acquire evidence that I’m causing harm, then you should present that evidence in an appropriate venue in order to either persuade me to stop causing harm, or persuade other people to coördinate to stop me from causing harm.
I was claiming that the long post you wrote at the top of the thread, where you made several analogies about your response, was exactly the sort of gray-area situation where, depending on context, the community might decide to sacrifice its sacred value.
So, my current guess (which is only a guess and which I would have strongly disagreed with ten years ago) is that this is a suicidally terrible idea that will literally destroy the world. Sound like an unreflective appeal to sacred values? Well, maybe!—you shouldn’t take my word for this (or anything else) except to the exact extent that you think my word is Bayesian evidence. Unfortunately I’m going to need to defer supporting argumentation to future Less Wrong posts, because mental and financial health requirements force me to focus on my dayjob for at least the next few weeks. (Oh, and group theory.)

(End of thread for me.)
So, it’s actually not clear to me that deliberate negative intentions are particularly important, here or elsewhere?
(Responding, though I don’t expect another response back, because you’re busy.)
I used to think this, but I’ve since realized that intentions STRONGLY matter. It seems like such systems are fractal: the goals of the subparts/subagents get reflected in the goals of the broader system. People with aligned intentions will tend to shift the incentive gradients, as will people with unaligned intentions (of course, this isn’t a one-way relationship; the incentive gradients will also shift the intentions).
I deny that your approach ever has an advantage over recognizing that definitions are tools which have no truth values, and then digging into goals or desires.
Thanks, these are some great points on some of the costs of decoupling norms! (As you’ve observed, I’m generally pretty strongly in favor of decoupling norms, but policy debates should not appear one-sided.)
someone brings it up all the time
I would want to distinguish “brings it up all the time” in the sense of “this user posts about this topic when it’s not relevant” (which I agree is bad and warrants moderator action) versus the sense of “this user posts about this topic a lot, and not on other topics” (which I think is generally OK).
If someone is obsessively focused on their narrow special interest—let’s say, algebraic topology—and occasionally comments specifically when they happen to think of an application of algebraic topology to the forum topic, I think that’s fine, because people reading that particular thread get the benefit of a relevant algebraic topology application—even if looking at that user’s posting history leaves one with an unsettling sense of, “Wow, this person is creepily obsessed with their hobbyhorse.”
tries to twist other people’s posts towards a discussion of their thing
I agree that this would be bad, but I think it’s usually possible to distinguish “twist[ing] other people’s posts towards a discussion of their thing” from a genuinely relevant mention of the thing that couldn’t (or shouldn’t) be reasonably expected to derail the discussion?
In the present case, my great-great-grandparent comment notes that the list-of-koans format lends itself to readers contributing their own examples in the comments, and I tried to give two such examples (trying to mimic the æsthetic of the OP by continuing the numbered list and Alice/Bob/Charlie/&c. character name sequence), one of which related the theme of the OP to the main point of one of my recent posts.
In retrospect, maybe I should’ve thought more carefully about how to phrase the proposed example in a way that makes the connection to the OP more explicit/obvious? (Probably-better version: “A meaningful ‘Yes’ answer to the question ‘Is G an H?’ requires a definition of H such that the answer could be ‘No’.”)
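A toy way to see the point (hypothetical numbers, not code from the original discussion; it just restates the information-content idea from “A Human’s Guide to Words”): if the definition of H is so broad that the answer to “Is G an H?” could never be “No”, then hearing “Yes” conveys zero bits.

```python
import math

def bits_from_yes(prior_p_yes: float) -> float:
    """Bits of information conveyed by hearing 'Yes',
    given the prior probability that the answer would be 'Yes'."""
    return math.log2(1.0 / prior_p_yes)

print(bits_from_yes(1.0))  # 0.0 -- a definition under which 'No' is impossible tells you nothing
print(bits_from_yes(0.5))  # 1.0 -- a definition that could have ruled G out actually narrows things down
```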
It’s true that, while composing the great-great-grandparent, I was kind of hoping that some readers would click through the link and read my earlier post, which I worked really hard on and which I think is filling in a gap in “A Human’s Guide to Words” that I’ve seen people be confused about. But I don’t see how this can reasonably be construed as an attempt to derail the discussion? Like, I ordinarily wouldn’t expect a brief comment of the form “Great post! Here’s a couple more examples that occurred to me, personally” to receive any replies in the median case.
(Although unfortunately, it empirically looks like the discussion did, in fact, get derailed. I feel bad for Scott G. that we’re cluttering up his comment section like this, but I can’t think of anything I wish I had done differently other than wording the great-great-grandparent more clearly, as mentioned in the paragraph-before-last. Given Vanessa’s reply, I felt justified in writing my counterreply … and here we are.)
It would be perfectly alright for moderators who didn’t want to drive away their visitors to ask this person to stop.
Agreed, the moderators are God and their will must be obeyed.
kick out someone who has a bad reputation that makes important posters unable to post on your website because they don’t want to associate with that person, even IF that person has good behavior
So, the dynamic you describe here definitely exists, but I actually think it’s a pretty serious problem for our collective sanity: if some truths happen to lie outside of Society’s Overton window, then systematic truthseekers (who want to collect all the truths, not just the majority of them that are safely within the Overton window) will find themselves on the wrong side of Respectability, and if people who care about being Respectable (and thereby having power in Society) can’t even talk to people outside the Overton window (not even agree with—just talk to, using, for example, a website), then that could have negative instrumental consequences in the form of people with power in Society making bad policy decisions on account of having inaccurate beliefs.
I want to write more about this in the future (albeit not on Less Wrong), but in the meantime, maybe see the immortal Scott Alexander’s “Kolmogorov Complicity And The Parable Of Lightning” for an expression of similar concerns:
Some other beliefs will be found to correlate heavily with lightning-heresy. Maybe atheists are more often lightning-heretics; maybe believers in global warming are too. The enemies of these groups will have a new cudgel to beat them with, “If you believers in global warming are so smart and scientific, how come so many of you believe in lightning, huh?” Even the savvy Kolmogorovs within the global warming community will be forced to admit that their theory just seems to attract uniquely crappy people. It won’t be very convincing. Any position correlated with being truth-seeking and intelligent will be always on the retreat, having to forever apologize that so many members of their movement screw up the lightning question so badly.
Regarding “Kolmogorov complicity”, I just want to make clear that I don’t want to censor your opinion on the political question. Such censorship would only serve to justify your notion that “we only refuse to believe X because it’s heresy, while any systematic truthseeker would believe X”, which is something I very much disagree with. I might be interested in discussing the political question if we were allowed to do it. It’s the double bind of not being allowed to argue with you on the political question, while having to listen to you constantly hinting at it, that is bugging me. Then again, I don’t really have a good solution.
I’ve read Zack’s blog (the one that is not under the name Zack M. Davis), and his hobbyhorse has to do with transgender issues and gender categories. However, even when he is writing directly about the matter on his own blog, I am unclear what he is actually saying about these issues. There is still a certain abstractness and distance from the object level.

Just FYI.

(I had originally strong-downvoted the parent because I don’t think it’s relevant, but alas, it looks like the voting population disagreed.)

Wait, really? Am I that bad of a writer??

Well, yes. I’m a rationalist. What do you expect?

Engagement with the object level.
It is nearly impossible for a human being to write a correct program just by thinking really hard. And that is a situation where everything is cut and dried, mathematically exact. Mathematicians do fairly well at proving theorems rigorously, but they have an easier task than programmers, for they only have to convince people, not machines. Outside of those domains, abstract argument on its own is nothing more than abstract art, unless it is continually compared with the object level and exposed to modus delens.
And the object level is what we’re all doing this for, or what’s the point?
And the object level is what we’re all doing this for, or what’s the point?
What’s the point of concrete ideas, compared to more abstract ideas? The reasons seem similar, just with different levels of grounding in experience, like with a filter bubble that you can only peer beyond with great difficulty. This situation is an argument against emphasis on the concrete, not for it.
(I think there’s a mixup between “meta” and “abstract” in this subthread. It’s meta that exists for the object level, not abstractions. Abstractions are themselves on object level when you consider them in their own right.)

Everything is on the object level when considered in its own right.
Abstractions are a central example of things considered on the object level, so I don’t understand them as being in opposition to the object level. They can be in opposition to more concrete ideas, those closer to experience, but not to being considered on object level.
The point is the relationship between the levels of the ladder of abstraction. Outside of mathematics and programming, long arguments at high levels go wrong without being checked against experience. If experience contradicts, so much the worse for the argument.
Unsure of mathematics, but software development goes wrong in exactly the same way—designs and ideas too far removed from the silicon go wildly wrong and don’t match at all what actually gets built. Eventually, the code wins and the arguments lose (or more often, the code fails and everybody loses).