Said, I suspect you have had many discussions here on this and related topics in the past, mostly along the lines of @Zack_M_Davis’s “Lack of Social Grace is an Epistemic Virtue”. As such, I doubt what I am writing here will be novel to you, but I shall present it anyway. [1]
As I see it, there are four main (and mostly independent) reasons why “statements should be at least two of true, necessary/useful, and kind” [2] is a good norm to enforce in a community: [3]
By requiring that statements be either necessary or kind in addition to merely being true, leaders/moderators of a community can increase the overall level of kindness in the discussions that take place. This helps move conversations away from a certain adversarial frame common on the Internet (in which users mostly try to one-up one another by signaling how intelligent they are) and towards a more cooperative stance, which scales better, allows for a more robust process of (actual, real-world) community building [4], and improves user retention and satisfaction.
Since different users have differing levels of tolerance for unkind statements or for attempts (by fellow users) to other-optimize their lives, the people most likely to be upset by such statements, and eventually to leave the community, are those who are most sensitive to them. This generates a process of group evaporative cooling that manifests as a positive feedback loop: the average kindness level of comments goes down, and the users least tolerant of unkind, off-topic, or other-optimizing comments leave. Over the mid-to-long term, the only people left in the community would be a small minority of users with really thick skins who care about the truth above all else, along with a sizeable majority of people who enjoy intentionally insulting and lambasting others under the guise of “I’m only calling it as I see it.” That would not be a fun place to participate in, from the perspective of the vast majority of potential users who would otherwise be willing to contribute. [5]
Robust norms against optimizing solely for truth at the expense of other relevant considerations (such as the likely impact of one’s statement on its intended readers) provide a powerful deterrent to users who often teeter on the edge of the provocative/acceptable boundary. This is important because human rationality is bounded and the individual ascertainments by users that their comments are true and positive-value are not always reliable. As such, the community, on the whole, ends up erring on the side of caution. While this might not maximize the expectation of the aggregate value created (by prohibiting comments that turn out, in retrospect, to have been valuable), it could nonetheless be the optimal choice because it significantly lowers the likelihood and prevalence of overly harmful content (which, if disvalued strongly enough, becomes a more important consideration than maximizing the utility of the average state; a toy numerical illustration of this trade-off follows the list of reasons below). [6]
Even if we were to stipulate that users could always (and in a cost-free manner) correctly determine that their comments are true before posting them, the communication of true facts to other people is not always a good idea and sometimes serves to create harm or confusion. These ideas were expanded upon clearly and concretely in Phil Goetz’s classic post on “Reason as memetic immune disorder”, which ably explains how “the landscape of rationality is not smooth; there is no guarantee that removing one false belief will improve your reasoning instead of degrading it.” [7] The norms we are talking about attempt to counteract these problems by requiring that users think about the impact, helpfulness, and necessity of their comments (as they relate to other people) before posting them.
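To make the trade-off in the third point concrete, here is a toy numerical illustration; the policies, probabilities, and utility function are invented for the example rather than taken from anyone’s comment. Suppose a permissive moderation policy produces a discussion worth V = +10 with probability 0.9 and a seriously harmful thread worth V = −50 with probability 0.1, while a cautious policy produces V = +3 for sure, and suppose the community disvalues harm three times as strongly as it values benefit, so that U(V) = V for V ≥ 0 and U(V) = 3V for V < 0. Then

\[ \mathbb{E}[V_{\text{permissive}}] = 0.9(10) + 0.1(-50) = 4 > 3 = \mathbb{E}[V_{\text{cautious}}], \]

\[ \mathbb{E}[U(V_{\text{permissive}})] = 0.9(10) + 0.1\,\bigl(3 \cdot (-50)\bigr) = -6 < 3 = \mathbb{E}[U(V_{\text{cautious}})]. \]

The utility of the expectation favors the permissive policy (U(4) = 4 > U(3) = 3), while the expectation of the utility favors the cautious one; this is the sense in which erring on the side of caution can be optimal even though it does not maximize expected raw value.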
Of course, this is only one side of the equation, namely “why require that comments are necessary/helpful or kind in addition to being true?” There is also the reverse question, regarding “why is it okay for a statement to be helpful and kind, if it in fact is not true?” [8] This deals mostly with white lies or other good-faith actions and comments by users to help others, and it is generally seen as bad to punish thoughtful attempts to be kind (as that predictably reduces the overall level of kindness in the interactions community members have, which is bad as per the discussion above).
Thinking about it in more detail, it feels like a type/category error to lump “true” and “necessary/useful” (which are qualities having to do with the underlying territory that a statement refers to) in with “kind” (which deals with the particular stance the user writing the statement takes towards intended readers). Since no human has direct API access to undeniable truths and everything we think about must instead be mediated by our imperfect and biased minds, the best we can do is say what we think is true while hoping that it is in fact correct. The strongest argument for nonetheless stating the norm as having to do with “truth” is that it likely invites fewer debates over the intent and the content of the minds of others than something like “honesty” would (and, from the perspective of a moderator of an internet community, norms against mind-reading and ascribing bad attitudes to strangers are crucial for the sustained thriving of the community).
Given your stated opposition to guideline 5 of Duncan Sabien’s “Basics of Rationalist Discourse,” I can already expect a certain subset of these arguments not to be persuasive to you.
I suspect you would likely say this is not okay, for reasons of Entangled Truths, Contagious Lies and because of a general feeling that telling the truth is simply more dignified than optimizing for anything else.
There is also the reverse question, regarding “why is it okay for a statement to be helpful and kind, if it in fact is not true?” … I suspect you would likely say this is not okay, for reasons of Entangled Truths, Contagious Lies and because of a general feeling that telling the truth is simply more dignified than optimizing for anything else.
Knowingly saying things that are not true is bad, yes. (It might be good sometimes… when communicating with enemies. There’s a reason why the classic thought experiment asks whether you would lie to the Gestapo!) I don’t see any even somewhat plausible justification for doing such things in a discussion on Less Wrong. There is no need to reach for things like “dignity” here… are we really having the “is posting lies on Less Wrong bad, actually? what if it’s good?” discussion?
This deals mostly with white lies or other good-faith actions and comments by users to help others, and it is generally seen as bad to punish thoughtful attempts to be kind (as that predictably reduces the overall level of kindness in the interactions community members have, which is bad as per the discussion above).
Seen by whom? I certainly hope that nobody here on LW thinks that it’s bad to punish “thoughtful attempts to be kind” that involve lying. Do you think that?!
In any case, I can’t help but notice that you spend a lot of time discussing the “kind” criterion, approximately one paragraph (and a rather baffling one, at that) on the “true” criterion, but you say almost nothing about the “necessary” criterion. And when you do mention it, you actually shift, unremarked, between several different versions of it:
necessary/useful
Right away we’ve got confusion. What’s the criterion—“necessary”, or “useful”? Or either? Or both? These are two very different concepts!
necessary/helpful
Now we’ve added “helpful” to the mix. This again is different from both “necessary” and “useful”.
And what do any of these things mean, anyhow? What is “necessary”? What is “useful”? Who decides, and how? (Ditto for “kind”, although that one is at least more obviously vague and prone to subjectivity of interpretation, and enough has been said about that already.)
In any case, I’ve written about this problem before. Nothing’s changed since then. “Don’t post falsehoods” is a good rule (with common-sense elaborations like “if it’s demonstrated that something you thought was true is actually false, post a correction”, etc.). This stuff about “kind”, on the other hand, much less “necessary” (or “useful” or any such thing)… Well, let me put it this way: when anyone can give anything like a coherent, consistent, practically applicable definition of any of these things, then I’ll consider whether it might possibly be a good idea to take them as ideals. (To say nothing of elevating them to rule status—which, at least, Less Wrong has not done. Thankfully.)
FWIW, I think of it as a useful mnemonic, and more as something like “here are three principal components on which I would score a comment before posting, to evaluate whether posting it is a good idea”. I think the three hypothesized principal components are decent at capturing important aspects of reality, but not perfect.
I think on LW the “is it true”/”logically valid”/”written with truth-seeking orientation” principal component is more important than on other forums.
I also think de-facto we have settled on the “is it kind”/”written with caring”/”avoiding causing emotional hurt” direction being less important on LW, though this is a topic of great long-standing disagreement between people on the site. To the substantial but still ultimately limited degree I can set norms on the site, I think the answer to this should probably roughly be “yes, I think it makes sense for it to be less important around here, but on the margin I think I would like there to be more kindness, but like, not enormously more”.
I feel like LW has varied a surprising amount on how much it values something like the “useful”/”relevant”/”important”/”weightiness”/”seriousness” principal component. LW is not a humor site, and there isn’t a ton of frivolous posting, but we do have a lot of culture and people do have fun a bunch. My current take is that LW does care about this dimension more than most other internet forums, but a lot less than basically all professional forums or scientific journals or anything in that reference class.
Not sure whether this helps this discussion. I found it helpful to think through this, and figured I would share more of the principal-component framing of this, which is closer to how I think about it. Also, maybe you don’t see how these three things might meaningfully be considered three principal components of comment quality, or don’t see how they carve reality at its joints, which IDK, I am not like enamored with this frame, but I found that it pays decent rent in my ability to predict the consequences of posting a comment.
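As a purely illustrative aside (not anything habryka or the LessWrong team actually uses): one way to read the “score a comment on three principal components” framing is as a weighted evaluation along the three axes, with the weights reflecting how much a given site cares about each. The sketch below uses invented weights and an invented threshold.

```python
def should_post(truth: float, kindness: float, usefulness: float) -> bool:
    """Toy sketch of the 'three principal components' framing.

    Each argument is a subjective 0-1 score for the draft comment. The
    weights are invented for illustration; per the discussion above, a
    LessWrong-flavored weighting puts the most emphasis on truth, some on
    usefulness/relevance, and comparatively less on kindness.
    """
    weights = {"truth": 0.55, "usefulness": 0.30, "kindness": 0.15}
    score = (
        weights["truth"] * truth
        + weights["usefulness"] * usefulness
        + weights["kindness"] * kindness
    )
    return score >= 0.6  # invented posting threshold


# Example: a true, moderately useful, not especially kind comment.
print(should_post(truth=0.9, kindness=0.3, usefulness=0.6))  # True (score ≈ 0.72)
```

Of course, a linear score with a hard threshold flattens exactly the nuances debated below; it is only meant to make the “three components, weighted differently on different sites” idea tangible.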
I feel like LW has varied a surprising amount on how much it values something like the “useful”/”relevant”/”important”/”weightiness”/”seriousness” principal component. LW is not a humor site, and there isn’t a ton of frivolous posting, but we do have a lot of culture and people do have fun a bunch. My current take is that LW does care about this dimension more than most other internet forums, but a lot less than basically all professional forums or scientific journals or anything in that reference class.
Part of the problem (not the entirety of it by any means, but a substantial part) is that the claim that these things (‘“useful”/”relevant”/”important”/”weightiness”/”seriousness”’) are somehow one thing is… extremely non-obvious. To me they seem like several different things. I don’t even really know what you mean by any of them (I can make some very general guesses, but who knows if my guesses are even close to what you’ve got in mind?), and I definitely don’t have the first clue how to interpret them as some kind of single thing.
(Really there are so many problems with this whole “kind/true/necessary, pick 2” thing that it seems like if I start listing them, we’ll be here all day. Maybe a big part of the problem is that I’ve never seen it persuasively—or even seriously—defended, and yet it’s routinely cited as if it’s just an uncontroversially good framework. It seems like one of those things that has so much intuitive appeal that most people simply refuse to give it any real thought, no matter how many and how serious are the problems that it is demonstrated to have, because they so strongly don’t want to abandon it. I do not include you among that number, to be clear.)
To me they seem like several different things. I don’t even really know what you mean by any of them (I can make some very general guesses, but who knows if my guesses are even close to what you’ve got in mind?), and I definitely don’t have the first clue how to interpret them as some kind of single thing.
My guess is I could communicate the concept extensionally by just pointing to lots of examples, but IDK, I don’t super feel like putting in the effort right now. I broadly agree with you that this feels like a framework that is often given too much weight, but I also get decent mileage out of the version I described in my comment.
To be clear, it wasn’t my intention to suggest that I expected you to clarify, right here in this comment thread, this concept (or concepts) that you’re using. I don’t expect that would be a good use of your time (and it seems like you agree).
My point, rather, was that it’s really not clear (to me, and I expect—based on many concrete instances of experience!—to many others, also) what these words are being used to mean, whether this is a single concept or multiple concepts, what exactly those concepts are, etc. Quite apart from the concrete question “what does that mean”, the fact that I don’t have even any good guesses about the answer (much less anything resembling certainty) makes the concept in question a poor basis for any evaluation with practical consequences!
I broadly agree with you that this feels like a framework that is often given too much weight, but I also get decent mileage out of the version I described in my comment.
I don’t doubt that, but I do want to point out that a set of criteria applied to oneself, and the same set of criteria applied to others, will have very different consequences.
Knowingly saying things that are not true is bad, yes. (It might be good sometimes… when communicating with enemies. There’s a reason why the classic thought experiment asks whether you would lie to the Gestapo!)
The thought experiment is fine as a stand-alone dilemma for those who have some particularly powerful deontological commitment to truthtelling, I suppose. Since I do not number myself among them, deciding to lie in that case is straightforwardly the choice I would make. And, of course, lying to enemies is indeed often useful (perhaps even the default option to choose).
But I definitely do not agree with the (implied) notion that it is only when dealing with enemies that knowingly saying things that are not true is the correct option. There have, of course, been a great many words spilled on this matter here on LW that, in my view, ultimately veered very far away from the rather straightforward conclusion: when we analyze the expected consequences of our words to others (consequences factor into my decisions at least to some extent, and have in any case been part of the ethos of LessWrong for more than 16 years) and realize that the optimal choice is not the one that maximizes our estimate of truth value (or even the one that minimizes it), that is the action we select. Since we are imperfect beings running on corrupted hardware, we can (and likely should) make allowances for the fact that we are likely worse at modeling the world than we would intuitively expect to be, perhaps by going 75% of the way to consequentialism instead of committing ourselves to it fully?
But that merely means adjusting the ultimate computation to take such considerations into account, and definitely does not even come close to meaning you write down your bottom line as “never knowingly say things that are not true to friends or strangers” before actually analyzing the situation in front of you on its own merits.
I don’t see any even somewhat plausible justification for doing such things in a discussion on Less Wrong. There is no need to reach for things like “dignity” here… are we really having the “is posting lies on Less Wrong bad, actually? what if it’s good?” discussion?
First, as a more meta note: incredulity is an argumentative superweapon and thus makes for a powerful rhetorical tool. I do not think there is anything inherently wrong with using it, of course, but in this particular case I also do not think it is bad to point this out, so that readers are aware and can adjust their beliefs accordingly.
Now, on to the object level: LessWrong is indeed founded on a set of principles that center around truthtelling to such an extent that posting lies is indeed bad here. But the justification for that is not a context-free Fully General Counterargument in favor of maximal truth-seeking, but rather a basic observation that violating the rules of the site will predictably and correctly lead to negative outcomes for everyone involved (from yourself getting banned, to the other users you are responding to being rightfully upset about the violation of norms of expected behavior, to the moderators having to waste time getting rid of you, and to second-order consequences for the community in terms of weakening the effect of the norms on constraining other users, which would make future interactions between them more chaotic and overall worse off).
But zooming out for a second, the section of your comment that I quoted is mostly out of scope of my own comment, which explained the general reasons why the particular norms at issue are good for communities. You have focused in on a specific application of those norms which would be unreasonable (and is in any case merely the final paragraph of my comment, which I spent much less time on since it is by far the least applicable to the case of LessWrong). There is no contradiction here: as in many other areas (including law), the specific controls the general, and particular considerations often force general principles to give way. It is yet another illustration of why those principles are built to be flexible in the first place, which accords with my analysis above.
Seen by whom?
Seen by the users of the site, at a general level, and in particular the overwhelming majority of lurkers who form a crucial part of the user base at any given time (and influence the content on the site through their votes), in no small part because of their potential to become more productive generators of value in the future (of course, that potential only becomes actualized if those users are not driven out beforehand by an aggregate lack of adherence to norms akin to the ones discussed here, as per my explanation in parts 1 and 2 above).
There is a tremendous selection bias involved in only taking into account the opinions of those who comment; while these people may be more useful for the community (and many of them may have been here for a longer time and thus feel that it is their garden too), the incredulity you offer up at the possible existence of disagreement on this matter does not seem to square with the community’s aggregate opinions on these topics when they came up 1-1.5 years ago (I’d rather not relitigate those matters, so as to respect the general community consensus that they are in the past, but I should give at least one illustrative example that forms the prototype for what I was talking about).
I’ll take the rest of your comment slightly out of order, so as to better facilitate the explanation of what’s involved.
you say almost nothing about the “necessary” criterion
This is incorrect as a factual matter. There are 3 crucial moments where the “necessary” criterion came into play in my explanation of the benefits of these norms:
by discouraging other-optimizing (this is mentioned in point 2 above) and other related behaviors as being neither necessary nor useful, even in situations where the user might honestly (and correctly) believe that they are merely “telling it as it is.”
approximately one paragraph (and a rather baffling one, at that) on the “true” criterion
I would wager that LessWrong understands the concept of “truth” to a far greater extent than virtually any other place on the internet. I did not expand upon this because I had nothing necessary or useful to say that has not already been enshrined as part of the core of the site.
By contrast, what I explain in one of my footnotes is that “true” and “kind,” on their face, do not belong in the same category because they essentially have different conceptual types. Yet, as I wrote above, there is an important reason why the norms are typically stated in the form Davidmanheim chose rather than with “honest” substituted for “true” (which, admittedly, would make the conceptual types the same).
And when you do mention it, you actually shift, unremarked, between several different versions of it
There does not seem to be a natural boundary that carves reality at the joints in a manner that leaves the different formulations in different boxes, given the particular context at play here (namely, user interactions in an online community). From a literal perspective, it is virtually never “necessary” for a random internet person to tell you anything, unless for some reason you desperately need emergency help and they are the only ones that can supply you with the requisite information.
As such, “necessary” and “useful” play virtually the same role in this context. “Helpful” does as well, although it blends in part (but not the entirety) of the “kindness” rubric that we’ve already gone over.
These are two very different concepts!
As mentioned above, in the case of comment sections on LessWrong? They are not very different at all.
And what do any of these things mean, anyhow? What is “necessary”? What is “useful”? Who decides, and how? (Ditto for “kind”, although that one is at least more obviously vague and prone to subjectivity of interpretation, and enough has been said about that already.)
Quite to the contrary: the actual important problem at hand has indeed been written about before, about 9 years earlier than all of the links you just gave (except for possibly the last one, where “The topic or board you are looking for appears to be either missing or off limits to you”). The type of conceptual analysis your questions require will not output a short encoding of the relevant category in thingspace in the form of a nice, clean definition because the concepts themselves did not flow out of such a (necessary & sufficient conditions-type) definition to begin with. That would be precisely the same type of type error as the one I have already identified earlier in this comment. They are the result of a changing and flowing communal understanding of ideas that happens organically instead of in a guided or designed manner.
The people who decide are the moderators (not just in terms of having control of the ban-hammer, but also by setting soft rules or general expectations for what is and isn’t acceptable, commendable, or upvote-worthy through their communications of what LessWrong is about). And they decide how to set those rules, norms, and expectations based on their own experiences and judgment, while taking into account the continuous feedback that community members give regarding the quality of content on the site (as per my earlier explanation above in this comment).
Well, let me put it this way: when anyone can give anything like a coherent, consistent, practically applicable definition of any of these things, then I’ll consider whether it might possibly be a good idea to take them as ideals.
Putting aside my previous two paragraphs, it is, of course, good for systems of norms, just like most incentive systems overall, to be clear and legible enough that users end up understanding them and become free to optimize within their framework. But Internet moderation cannot function with the same level of consistency and ease of application (and, consequently, lack of discretion) as something like, say, a legal system, let alone the degree of coherence that you seem to demand in order to even consider whether these norms could possibly be a good idea.
It is valuable and a precious public good to make it easy to know which actions you take will cause you to end up being removed from a space. However, that legibility also comes at great cost, especially in social contexts. Every clear and bright-line rule you outline will have people butting right up against it, and de-facto, in my experience, moderation of social spaces like LessWrong is not the kind of thing you can do while being legible in the way that, for example, modern courts aim to be legible.
As such, we don’t have laws. If anything we have something like case-law which gets established as individual moderation disputes arise, which we then use as guidelines for future decisions, but also a huge fraction of our moderation decisions are downstream of complicated models we formed about what kind of conversations and interactions work on LessWrong, and what role we want LessWrong to play in the broader world, and those shift and change as new evidence comes in and the world changes.
I do ultimately still try pretty hard to give people guidelines and to draw lines that help people feel secure in their relationship to LessWrong, and I care a lot about this, but at the end of the day I will still make many from-the-outside-arbitrary-seeming decisions in order to keep LessWrong the precious walled garden that it is.
I definitely do not agree with the (implied) notion that it is only when dealing with enemies that knowingly saying things that are not true is the correct option
There’s a philosophically deep rationale for this, though: to a rational agent, the value of information is nonnegative. (Knowing more shouldn’t make your decisions worse.) It follows that if you’re trying to misinform someone, it must either be the case that you want them to make worse decisions (i.e., they’re your enemy), or you think they aren’t rational.
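For reference, here is a minimal sketch of the standard argument behind “the value of information is nonnegative,” stated for an idealized Bayesian expected-utility maximizer (the notation is mine, not from the comment). Let the agent choose an action a to maximize expected utility over an unknown state θ, and let X be a signal it could observe before choosing. Then

\[ \mathbb{E}_X\!\left[\max_a \mathbb{E}[U(a,\theta)\mid X]\right] \;\ge\; \max_a \mathbb{E}_X\!\left[\mathbb{E}[U(a,\theta)\mid X]\right] = \max_a \mathbb{E}[U(a,\theta)], \]

because for every realization of X the inner maximum is at least the value of sticking with whichever single action is best on average. So an idealized agent can never be made worse off in expectation by truthful information; the argument says nothing about agents who update imperfectly, which is exactly the escape hatch discussed in the replies below.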
To clarify, I straightforwardly do not believe any human being I have ever come into contact with is rational enough for information-theoretic considerations like that to imply that something other than telling the truth will necessarily lead to them making worse decisions.
The philosophical ideal can still exert normative force even if no humans are spherical Bayesian reasoners on a frictionless plane. The disjunction (“it must either be the case that”) is significant: it suggests that if you’re considering lying to someone, you may want to clarify to yourself whether and to what extent that’s because they’re an enemy or because you don’t respect them as an epistemic peer. Even if you end up choosing to lie, it’s with a different rationale and mindset than someone who’s never heard of the normative ideal and just thinks that white lies can be good sometimes.
Yes, this seems correct. With the added clarification that “respecting [someone] as an epistemic peer” is situational rather than a characteristic of the individual in question. It is not that there are people more epistemically advanced than me which I believe I should only ever tell the full truth to, and then people less epistemically advanced than me that I should lie to with absolute impunity whenever I start feeling like it. It depends on a particularized assessment of the moment at hand.
I would suspect that most regular people who tell white lies (for pro-social reasons, at least in their minds) generally do so in cases where they (mostly implicitly and subconsciously) determine that the other person would not react well to the truth, even if they don’t spell out the question in the terms you chose.
Is it the case that if there are two identically-irrational / -boundedly-rational agents, then sharing information between them must have positive value?
Over what is “necessarily” quantified, here? Do you mean:
“… to imply that something other than telling the truth will necessarily lead to them making worse decisions, in every case where they make any decision”?
or:
“… to imply that something other than telling the truth will necessarily lead to them making worse decisions, in every case where I tell them something other than the truth”?
or:
“… to imply that it is necessarily the case that a policy other than telling them the truth will, in expectation, lead to them making worse decisions on average”?
or something else?
I ask because under the first two interpretations, for example, the claim is true even when dealing with perfectly rational agents. But the third claim seems highly questionable if applied to literally all people whom you have met.
I believe that S=∅, where S={humans which satisfy T} and T = “in every single situation where someone can either tell them the truth or can instead tell them something false or not fully optimized for truth, they make better decisions when given the truth.”
This is compatible with S′≠∅, where S′={humans which satisfy T'} and T’ = “in the vast majority of situations where someone can either tell them the truth or can instead tell them something false or not fully optimized for truth, they make better decisions when given the truth.” In fact, I believe S’ to be a very large set.
It is also compatible with S″≠∅, where S″={humans which satisfy T″} and T″ = “a general policy of always telling them the truth will, on average (or in expectation over single events), result in them making better decisions than a general policy of never telling them the truth.” Indeed, I believe S″ to be a very large set as well.
Right, so, that first one is also true of perfectly rational agents, so tells us nothing interesting. (Unless you quantify “they make better decisions when given the truth” as “in expectation” or “on average”, rather than “as events actually turn out”—but in that case I once again doubt the claim as it applies to people you’ve met.)
Yes, in expectation over how the events can turn out given the (very rough and approximate) probability distributions they have in their minds at the time of/right after receiving the information. For every single person I know, I believe there are some situations where, were I to give them the truth, I would predict (at the time of giving them the information, not post hoc) that they will perform worse than if I had told them something other than the truth.
This is why I said “make better decisions” instead of merely “obtain better outcomes,” since the latter would lend itself more naturally to the interpretation of “as things actually turn out,” while the decision is evaluated on the basis of what was known at the time and not through what happened to occur.
To all that stuff about “consequentialism” and “writing down your bottom line”, I will say only that consequentialism is meaningless without values and preferences. I value honesty. I prefer that others with whom I interact do likewise.
Seen by whom?
Seen by the users of the site, at a general level, and in particular the overwhelming majority of lurkers
Now, how could you possibly know that?
you say almost nothing about the “necessary” criterion
This is incorrect as a factual matter. There are 3 crucial moments where the “necessary” criterion came into play in my explanation of the benefits of these norms:
Given that you didn’t actually make clear that any of those things were about the “necessary” condition (indeed you didn’t even mention the word in any of points #2, #3, or #4 in the grandparent—I did check), I think it’s hardly reasonable to call what I said “incorrect as a factual matter”. But never mind that; you’ve clarified your meaning now, I suppose…
Taking your clarification into account, I will say that I find all three of those points to be some combination of incomprehensible, baffling, and profoundly misguided. (I will elaborate if you like, though the details of these disagreements seem to me to be mostly a tangent.)
And when you do mention it, you actually shift, unremarked, between several different versions of it
There does not seem to be a natural boundary that carves reality at the joints in a manner that leaves the different formulations in different boxes, given the particular context at play here (namely, user interactions in an online community). From a literal perspective, it is virtually never “necessary” for a random internet person to tell you anything, unless for some reason you desperately need emergency help and they are the only ones that can supply you with the requisite information.
Yep. That certainly is the problem. And therefore, the conclusion is that this concept is hopelessly confused and not possible to usefully apply in practice—right?
As such, “necessary” and “useful” play virtually the same role in this context. “Helpful” does as well, although it blends in part (but not the entirety) of the “kindness” rubric that we’ve already gone over.
… I guess not.
Look, you certainly can say “I declare that what I mean by ‘necessary’ is ‘helpful’ and/or ‘useful’”. (Of course then you might find yourself having to field the question of whether “helpful” and “useful” are actually the same thing—but never mind that.) The question is, does everyone involved understand this term “necessary” in the same way?
Empirically: no.
except for possibly the last one, where “The topic or board you are looking for appears to be either missing or off limits to you”
Ah, sorry, that’s indeed a restricted forum. Here’s the post, with appropriate redactions:
They are the result of a changing and flowing communal understanding of ideas that happens organically instead of in a guided or designed manner.
Precisely the problem (well, one of the several problems, anyhow) is that there is not, in fact, a “communal understanding” of these concepts, but rather a whole bunch of different understandings—and given the words and concepts we’re talking about here, there’s no hope of it ever being otherwise.
Internet moderation
This discussion started out with a question about what is ideal, not what is allowed by the rules, or any such thing, so legibility of rules and consistency of moderation—important as those things might be—are not the point here.
To all that stuff about “consequentialism” and “writing down your bottom line”, I will say only that consequentialism is meaningless without values and preferences. I value honesty. I prefer that others with whom I interact do likewise.
As I mentioned, this fits in as a “particularly powerful commitment to truthtelling,” even as in your case it does not seem to be entirely deontological. Certainly a valid preference (insofar as preferences can even be described as “valid”), but orthogonal to the subset of empirical claims relating to how community interactions would likely play out with or without the norms under discussion.
Now, how could you possibly know that?
This is one of the rare occasions in which arguing by definition actually works (minus the general issues with modal logics). The lurkers are the members of the community that are active in terms of viewing (and even engaging with in terms of upvoting/downvoting) the content but who do not make their presence known through comments or posts. As per statistics I have seen on the site a few times (admittedly, most of them refer back to the period around ~2017 or so, but the phenomenon being discussed here has only been further accentuated with the more recent influx of users), LW is just like other Internet communities in terms of the vast majority of the userbase being made up of lurkers, most of whom have newer accounts and have engaged with the particularities of the subculture of longer-lasting members to a lesser extent. Instead, they come from different places on the Internet, with different background assumptions that are much more in line with the emphasis on kindness and usefulness (as LessWrong is more of an outlier among such communities in how much it cares about truth per se).
Yep. That certainly is the problem. And therefore, the conclusion is that this concept is hopelessly confused and not possible to usefully apply in practice—right?
You seem to have misread the point about one literal meaning of “necessary” referring only to cases of genuine (mostly medical) emergencies. It is not that this word is confused, but rather that, as with many parts of imperfect human languages that are built up on the fly instead of being designed purposefully from the beginning, it has different meanings in different contexts. It is overloaded, so the particular meaning at play depends on context clues (which are, by their nature, mostly implicit rather than explicitly pointed out) such as what the tone of the communication has been thus far, what forum it is taking place in, whether it is a formal or informal setting, whether it deals with topics in a mathematical or engineering or philosophical way, etc. (this goes beyond mere Internet discussions and in fact applies to all aspects of human-to-human communication). This doesn’t mean the word is “bad”, as it may be (and often is) still the best option available in our speech to express the ideas we are getting across.
Whether it is “necessary” for someone to point out that some other person is making a particular error (say, for example, claiming that a particular piece of legislation was passed in 1883 instead of the correct answer of 1882) depends on that particular context: if it is a casual conversation between friends in which one is merely explaining to the other that there have been some concrete examples of government-sponsored anti-Chinese legislation, then butting in to correct that is not so “necessary”; by contrast, if this is a PhD student writing their dissertation on this topic, and you are their advisor, it becomes much more “necessary” to point out their error early on.
The question is, does everyone involved understand this term “necessary” in the same way?
Certainly not! It would be quite amazing (and frankly rather suspicious) if this were to happen; consider just how many different users there are. The likelihood of every single one of them having the same internal conception of the term is so low as to be negligible. But, of course, it is not necessary to reach this point, nor is it productive to do so, as it requires wasting valuable time and resources to attempt to pin down concepts through short sentences in a manner that simply cannot work out. What matters is whether sufficiently many users understand the ideas well enough to behave according to them in practice; small disagreements of opinion may remain, but those can be ironed out on a case-by-case basis through the reasoning and discretion of community leaders (often moderators).
Empirically: no.
Yes, you are right to say this, as it seems you do not believe you have the same internal representation of these words as you think other people do (I fully believe that you are genuine in this). But, as mentioned above, the mere existence of a counterexample is not close to sufficient to bring the edifice down, and empirically, your confusions are more revealing about yourself and the way your mind works than they are a signal of underlying problems with the general community understanding of these matters; this would be an instance of improperly Generalizing From One Example.
Ah, sorry, that’s indeed a restricted forum. Here’s the post, with appropriate redactions:
Precisely the problem (well, one of the several problems, anyhow) is that there is not, in fact, a “communal understanding” of these concepts, but rather a whole bunch of different understandings—and given the words and concepts we’re talking about here, there’s no hope of it ever being otherwise.
This discussion started out with a question about what is ideal, not what is allowed by the rules, or any such thing, so legibility of rules and consistency of moderation—important as those things might be—are not the point here.
“What is allowed by the rules”? What rules? There seems to be a different confusion going on in your paragraph here. What is being discussed is the community’s rules and norms themselves; it’s not that these are allowed or prohibited by something else, but rather they are the principles that allow or prohibit other behaviors by users in the future. The legibility of rules and consistency of moderation are absolutely part of the point because they weigh (to a heavy but not overwhelming extent) on whether such rules and norms are desirable, and, as a natural corollary, whether a particular system of such norms is ideal. This is because they impact the empirical outcomes of enacting them, such as whether users will understand them, adhere to them, promote them, etc. Ignoring the consequences of that is making the same type of error that has been discussed in this thread in depth.
This is one of the rare occasions in which arguing by definition actually works
You then followed this with several empirical claims (one of which was based not on any concrete evidence but just on speculation based on claimed general patterns). No, sorry, arguing by definition does not work here.
It is not that this word is confused … It is overloaded
Well, words can’t really be confused, can they? But yes, the word “necessary” is overloaded, and some or possibly most or maybe even all of its common usages are also vague enough that people who think they’re talking about the same thing can come up with entirely different operationalizations for it, as I have empirically demonstrated, multiple times.
Whether it is “necessary” for someone to point out that some other person is making a particular error … depends on that particular context
That is not the problem. The problem (or rather—again—one of multiple problems) is that even given a fixed context, people easily can, and do, disagree not only on what “necessary” means in that context, but also on whether any given utterance fits that criterion (on which they disagree), and even on the purpose of having the criterion in the first place. This, again, is given some context.
What matters is whether sufficiently many users understand the ideas well enough to behave according to them in practice
It’s pretty clear that they don’t. I mean, we’ve had how many years of SSC, and now ACX (and, to a lesser extent, also DSL and Less Wrong) to demonstrate this?
But, as mentioned above, the mere existence of a counterexample is not close to sufficient to bring the edifice down, and empirically, your confusions are more revealing about yourself and the way your mind works than they are a signal of underlying problems with the general community understanding of these matters; this would be an instance of improperly Generalizing From One Example.
Sorry, no, this doesn’t work. I linked multiple examples of other people disagreeing on what “necessary” means in any given context, including Scott Alexander, who came up with this criterion in the first place, disagreeing with himself about it, in the course of the original post that laid out the concept of moderating according to this criterion!
This discussion started out with a question about what is ideal, not what is allowed by the rules, or any such thing, so legibility of rules and consistency of moderation—important as those things might be—are not the point here.
“What is allowed by the rules”? What rules? There seems to be a different confusion going on in your paragraph here.
Uh… I think you should maybe reread what I’m responding to, there.
people who think they’re talking about the same thing can come up with entirely different operationalizations for it, as I have empirically demonstrated, multiple times
In any case, as I have mentioned above, differences in opinion are not only inevitable, but also perfectly fine as long as they do not grow too large or too common. From my comment above: “small disagreements of opinion may remain, but those can be ironed out on a case-by-case basis through the reasoning and discretion of community leaders (often moderators).”
It’s pretty clear that they don’t. I mean, we’ve had how many years of SSC, and now ACX (and, to a lesser extent, also DSL and Less Wrong) to demonstrate this?
LessWrong and SSC (while it was running) have maintained arguably the highest aggregate qualities of commentary on the Internet, at least among relatively large communities that I know about. I do not see what is being demonstrated, other than precisely the opposite of the notion that norms sketched out in natural language cannot be followed en masse or cannot generate the type of culture that promotes such norms to new users (or users just transitioning from being lurkers to generating content of their own).
Uh… I think you should maybe reread what I’m responding to, there.
I think I’m going to tap out now. Unfortunately, I believe we have exhausted the vast majority of useful avenues of discourse on this matter.
From my comment above: “small disagreements of opinion may remain, but those can be ironed out on a case-by-case basis through the reasoning and discretion of community leaders (often moderators).”
But these aren’t small disagreements. They’re big disagreements.
I do not see what is being demonstrated, other than precisely the opposite of the notion that norms sketched out in natural language cannot be followed en masse or cannot generate the type of culture that promotes such norms to new users (or users just transitioning from being lurkers to generating content of their own).
The SSC case demonstrates precisely that these particular norms, at least, which were sketched out in natural language, cannot be followed en masse or, really, at all, and that (a) trying to enforce them leads to endless arguments (usually started by someone responding to someone else’s comment by demanding to know whether it was kind or necessary—note that the “true” criterion never led to such acrimony!), and (b) the actual result is a steady degradation of comment (and commenter) quality.
The Less Wrong case demonstrates that the whole criterion is unnecessary. (Discussions like this one also demonstrate some other things, but those are secondary.)
Footnotes to the opening comment:

[1] Since it is both true and necessary/useful, meaning it meets the criteria. (It doesn’t seem particularly kind or unkind)
[4] I am aware of your historical (and likely also current?) opposition to the development of the rationalist “community,” but your view did not win out.
[5] Somewhat related quote by Scott Alexander: “The moral of the story is: if you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.”
[6] See, generally, Type I vs. Type II errors and the difference between the utility of the expectation and the expectation of the utility.
[7] See also the valley of bad rationality more generally.
[8] I suspect you would likely say this is not okay, for reasons of Entangled Truths, Contagious Lies and because of a general feeling that telling the truth is simply more dignified than optimizing for anything else.
This deals mostly with white lies or other good-faith actions and comments by users to help others …

What do you think “good faith” means? I would say that white lies are a prototypical instance of bad faith, defined by Wikipedia as “entertaining or pretending to entertain one set of feelings while acting as if influenced by another.”
Knowingly saying things that are not true is bad, yes. (It might be good sometimes… when communicating with enemies. There’s a reason why the classic thought experiment asks whether you would lie to the Gestapo!) I don’t see any even somewhat plausible justification for doing such things in a discussion on Less Wrong. There is no need to reach for things like “dignity” here… are we really having the “is posting lies on Less Wrong bad, actually? what if it’s good?” discussion?
Seen by whom? I certainly hope that nobody here on LW thinks that it’s bad to punish “thoughtful attempts to be kind” that involve lying. Do you think that?!
In any case, I can’t help but notice that you spend a lot of time discussing the “kind” criterion, approximately one paragraph (and a rather baffling one, at that) on the “true” criterion, but you say almost nothing about the “necessary” criterion. And when you do mention it, you actually shift, unremarked, between several different versions of it:
Right away we’ve got confusion. What’s the criterion—“necessary”, or “useful”? Or either? Or both? These are two very different concepts!
Now we’ve added “helpful” to the mix. This again is different from both “necessary” and “useful”.
And what do any of these things mean, anyhow? What is “necessary”? What is “useful”? Who decides, and how? (Ditto for “kind”, although that one is at least more obviously vague and prone to subjectivity of interpretation, and enough has been said about that already.)
In any case, I’ve written about this problem before. Nothing’s changed since then. “Don’t post falsehoods” is a good rule (with common-sense elaborations like “if it’s demonstrated that something you thought was true is actually false, post a correction”, etc.). This stuff about “kind”, on the other hand, much less “necessary” (or “useful” or any such thing)… Well, let me put it this way: when anyone can give anything like a coherent, consistent, practically applicable definition of any of these things, then I’ll consider whether it might possibly be a good idea to take them as ideals. (To say nothing of elevating them to rule status—which, at least, Less Wrong has not done. Thankfully.)
FWIW, I think of it as a useful mnemonic and more something like “here are three principal components on which I would score a comment before posting to evaluate whether posting it is a good idea”. I think the three hypothesized principal components are decent at capturing important aspects of reality, but not perfect.
I think on LW the “is it true”/”logically valid”/”written with truth-seeking orientation” principal component is more important than on other forums.
I also think de-facto we have settled on the “is it kind”/”written with caring”/”avoiding causing emotional hurt” direction being less important on LW, though this is a topic of great long-standing disagreement between people on the site. To the substantial but still ultimately limited degree I can set norms on the site, I think the answer to this should probably roughly be “yes, I think it makes sense for it to be less important around here, but on the margin I think I would like there to be more kindness, but like, not enormously more”.
I feel like LW has varied a surprising amount on how much it values something like the “useful”/”relevant”/”important”/”weightiness”/”seriousness” principal component. LW is not a humor site, and there isn’t a ton of frivolous posting, but we do have a lot of culture and people do have fun a bunch. My current take is that LW does care about this dimension more than most other internet forums, but a lot less than basically all professional forums or scientific journals or anything in that reference class.
Not sure whether this helps this discussion. I found it helpful to think through this, and figured I would share more of the principal-component framing of this, which is closer to how I think about it. Also, maybe you don’t see how these three things might meaningfully be considered three principal components of comment quality, or don’t see how they carve reality at its joints, which IDK, I am not like enamored with this frame, but I found that it pays decent rent in my ability to predict the consequences of posting a comment.
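To make the scoring framing a bit more concrete, here is a minimal sketch of the kind of evaluation I have in mind (the component scores, weights, and threshold below are purely illustrative assumptions, not anything LessWrong actually computes):

```python
# Illustrative sketch only: the weights and threshold are made-up numbers
# meant to convey the "score a comment on three components before posting"
# framing, not an actual site mechanism.

from dataclasses import dataclass

@dataclass
class CommentScores:
    true_seeking: float  # "is it true" / "logically valid" / "truth-seeking orientation"
    kind: float          # "is it kind" / "written with caring" / "avoids emotional hurt"
    useful: float        # "useful" / "relevant" / "important" / "weighty"

def should_post(scores: CommentScores,
                weights=(0.5, 0.2, 0.3),   # hypothetical: truth weighted highest on LW
                threshold=0.6) -> bool:
    """Combine the three hypothesized components into a single posting decision."""
    total = (weights[0] * scores.true_seeking
             + weights[1] * scores.kind
             + weights[2] * scores.useful)
    return total >= threshold

# Example: a true, fairly unkind, moderately useful comment still clears the bar.
print(should_post(CommentScores(true_seeking=0.9, kind=0.2, useful=0.6)))  # True
```

The only thing the weighted sum is meant to capture is that the “kind” component gets a nonzero but smaller weight than the “true” one, which matches the “less important around here, but a bit more kindness on the margin would be nice” stance above.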
Part of the problem (not the entirety of it by any means, but a substantial part) is that the claim that these things (‘“useful”/”relevant”/”important”/”weightiness”/”seriousness”’) are somehow one thing is… extremely non-obvious. To me they seem like several different things. I don’t even really know what you mean by any of them (I can make some very general guesses, but who knows if my guesses are even close to what you’ve got in mind?), and I definitely don’t have the first clue how to interpret them as some kind of single thing.
(Really there are so many problems with this whole “kind/true/necessary, pick 2” thing that it seems like if I start listing them, we’ll be here all day. Maybe a big part of the problem is that I’ve never seen it persuasively—or even seriously—defended, and yet it’s routinely cited as if it’s just an uncontroversially good framework. It seems like one of those things that has so much intuitive appeal that most people simply refuse to give it any real thought, no matter how many and how serious are the problems that it is demonstrated to have, because they so strongly don’t want to abandon it. I do not include you among that number, to be clear.)
My guess is I could communicate the concept extensionally by just pointing to lots of examples, but IDK, I don’t super feel like putting in the effort right now. I broadly agree with you that this feels like a framework that is often given too much weight, but I also get decent mileage out of the version I described in my comment.
To be clear, it wasn’t my intention to suggest that I expected you to clarify, right here in this comment thread, this concept (or concepts) that you’re using. I don’t expect that would be a good use of your time (and it seems like you agree).
My point, rather, was that it’s really not clear (to me, and I expect—based on many concrete instances of experience!—to many others, also) what these words are being used to mean, whether this is a single concept or multiple concepts, what exactly those concepts are, etc. Quite apart from the concrete question “what does that mean”, the fact that I don’t have even any good guesses about the answer (much less anything resembling certainty) makes the concept in question a poor basis for any evaluation with practical consequences!
I don’t doubt that, but I do want to point out that a set of criteria applied to oneself, and the same set of criteria applied to others, will have very different consequences.
The thought experiment is fine as a stand-alone dilemma for those who have some particularly powerful deontological commitment to truthtelling, I suppose. Since I do not number myself among them, deciding to lie in that case is straightforwardly the choice I would make. And, of course, lying to enemies is indeed often useful (perhaps even the default option to choose).
But I definitely do not agree with the (implied) notion that knowingly saying things that are not true is the correct option only when dealing with enemies. A great many words have, of course, been spilled on this matter here on LW, and in my view they ultimately veered very far away from a rather straightforward conclusion: when we analyze the expected consequences of our words to others (consequences factor into my decisions at least to some extent, and have in any case been part of the ethos of LessWrong for more than 16 years) and realize that the optimal choice is not the one that maximizes our estimation of truth value (or even the one that minimizes it), that is the action we select. Since we are imperfect beings running on corrupted hardware, we can (and likely should) make allowances for the fact that we are likely worse at modeling the world than we would intuitively expect to be, perhaps by going 75% of the way to consequentialism instead of committing ourselves to it fully?
But that merely means adjusting the ultimate computation to take such considerations into account, and definitely does not even come close to meaning you write down your bottom line as “never knowingly say things that are not true to friends or strangers” before actually analyzing the situation in front of you on its own merits.
First, as a more meta note: incredulity is an argumentative superweapon, and thus makes for a powerful rhetorical tool. I do not think there is anything inherently wrong with using it, of course, but in this particular case I also do not think it is bad to point this out, so that readers are aware and can adjust their beliefs accordingly.
Now, on to the object level: LessWrong is indeed founded on a set of principles that center on truthtelling, to such an extent that posting lies is indeed bad here. But the justification for that is not a context-free Fully General Counterargument in favor of maximal truth-seeking, but rather the basic observation that violating the rules of the site will predictably and correctly lead to negative outcomes for everyone involved (from yourself getting banned, to the other users you are responding to being rightfully upset about the violation of norms of expected behavior, to the moderators having to waste time getting rid of you, to second-order consequences for the community in terms of weakening the effect of the norms on constraining other users, which would make future interactions between them more chaotic and overall worse).
But zooming out for a second, the section of your comment that I quoted is mostly out of scope of my own comment, which explained the general reasons why the particular norms at issue are good for communities. You have focused in on a specific application of those norms which would be unreasonable (and is in any case merely the final paragraph of my comment, which I spent much less time on since it is by far the least applicable to the case of LessWrong). There is no contradiction here: as in many other areas (including law), the specific controls the general, and particular considerations often force general principles to give way. It is yet another illustration of why those principles are built to be flexible in the first place, which accords with my analysis above.
Seen by the users of the site, at a general level, and in particular the overwhelming majority of lurkers that form a crucial part of the base (and influence the content on the site through their votes) at any given time, in no small part because of their potential to become more productive generators of value in the future (of course, that potential only becomes actualized if those users are not driven out beforehand by an aggregate lack of adherence to norms akin to the ones discussed here, as per my explanation in parts 1 and 2 above).
There is a tremendous selection bias involved in only taking into account the opinions of those who comment; while these people may be more useful to the community (and many of them may have been here for a longer time and thus feel that it is their garden too), the incredulity you offer up at the possible existence of disagreement on this matter does not seem to have come to terms with the community’s aggregate opinions on these topics when they came up 1-1.5 years ago (I’d rather not relitigate those matters, so as to respect the general community consensus that they are in the past, but I should give at least one illustrative example that forms the prototype of what I was talking about).
I’ll take the rest of your comment slightly out of order, so as to better facilitate the explanation of what’s involved.
This is incorrect as a factual matter. There are three crucial places where the “necessary” criterion came into play in my explanation of the benefits of these norms:
by discouraging other-optimizing (this is mentioned in point 2 above) and other related behaviors as being neither necessary nor useful, even in situations where the user might honestly (and correctly) believe that they are merely “telling it as it is.”
by focusing at least in small part on the impact that the statement has on the readers as opposed to the mere question of how closely it approximates the territory and constrains it (this is the primary focus of point 3 above).
virtually the entirety of point 4.
I would wager that LessWrong understands the concept of “truth” to a far greater extent than virtually any other place on the internet. I did not expand upon this because I had nothing necessary or useful to say that has not already been enshrined as part of the core of the site.
By contrast, what I explain in one of my footnotes is that “true” and “kind,” on their face, do not belong in the same category, because they essentially have different conceptual types. Yet, as I wrote above, there is an important reason why the norms are typically stated in the form Davidmanheim chose rather than by substituting “honestly” for “true” (which, admittedly, would make the conceptual types the same).
There does not seem to be a natural boundary that carves reality at the joints in a manner that leaves the different formulations in different boxes, given the particular context at play here (namely, user interactions in an online community). From a literal perspective, it is virtually never “necessary” for a random internet person to tell you anything, unless for some reason you desperately need emergency help and they are the only ones who can supply you with the requisite information.
As such, “necessary” and “useful” play virtually the same role in this context. “Helpful” does as well, although it blends in part (but not the entirety) of the “kindness” rubric that we’ve already gone over.
As mentioned above, in the case of comment sections on LessWrong? They are not very different at all.
Quite to the contrary: the actual important problem at hand has indeed been written about before, about 9 years earlier than all of the links you just gave (except for possibly the last one, where “The topic or board you are looking for appears to be either missing or off limits to you”). The type of conceptual analysis your questions require will not output a short encoding of the relevant category in thingspace in the form of a nice, clean definition because the concepts themselves did not flow out of such a (necessary & sufficient conditions-type) definition to begin with. That would be precisely the same type of type error as the one I have already identified earlier in this comment. They are the result of a changing and flowing communal understanding of ideas that happens organically instead of in a guided or designed manner.
The people who decide are the moderators (not just in terms of having control of the ban-hammer, but also by setting soft rules or general expectations for what is and isn’t acceptable, commendable, or upvote-worthy, through their communications of what LessWrong is about). And they decide how to set those rules, norms, and expectations based on their own experiences and judgment, while taking into account the continuous feedback that community members give regarding the quality of content on the site (as per my earlier explanation above in this comment).
Putting aside my previous two paragraphs, it is, of course, good for systems of norms, just like most incentive systems overall, to be clear and legible enough that users end up understanding them and become free to optimize within their framework. But Internet moderation cannot function with the same level of consistency and ease of application (and, consequently, lack of discretion) as something like, say, a legal system, let alone the degree of coherence that you seem to demand in order to even consider whether these norms could possibly be a good idea.
As Habryka explained less than 3 months ago:
There’s a philosophically deep rationale for this, though: to a rational agent, the value of information is nonnegative. (Knowing more shouldn’t make your decisions worse.) It follows that if you’re trying to misinform someone, it must either be the case that you want them to make worse decisions (i.e., they’re your enemy), or you think they aren’t rational.
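(Sketched out in standard decision-theoretic terms, with notation that is illustrative rather than taken from any particular source: a Bayesian agent choosing an action a to maximize expected utility can always simply ignore a new observation x, so E_x[max_a E[U∣a,x]] ≥ max_a E_x[E[U∣a,x]] = max_a E[U∣a]. The expected value of the information is the difference between the left and right sides, which is therefore nonnegative, assuming the observation is truthful and the agent updates on it correctly.)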
To clarify, I straightforwardly do not believe any human being I have ever come into contact with is rational enough for information-theoretic considerations like that to imply that something other than telling the truth will necessarily lead to them making worse decisions.
The philosophical ideal can still exert normative force even if no humans are spherical Bayesian reasoners on a frictionless plane. The disjunction (“it must either be the case that”) is significant: it suggests that if you’re considering lying to someone, you may want to clarify to yourself whether and to what extent that’s because they’re an enemy or because you don’t respect them as an epistemic peer. Even if you end up choosing to lie, it’s with a different rationale and mindset than someone who’s never heard of the normative ideal and just thinks that white lies can be good sometimes.
Yes, this seems correct. With the added clarification that “respecting [someone] as an epistemic peer” is situational rather than a characteristic of the individual in question. It is not that there are people more epistemically advanced than me which I believe I should only ever tell the full truth to, and then people less epistemically advanced than me that I should lie to with absolute impunity whenever I start feeling like it. It depends on a particularized assessment of the moment at hand.
I would suspect that most regular people who tell white lies (for pro-social reasons, at least in their minds) generally do so in cases where they (mostly implicitly and subconsciously) determine that the other person would not react well to the truth, even if they don’t spell out the question in the terms you chose.
Is it the case that if there are two identically-irrational / -boundedly-rational agents, then sharing information between them must have positive value?
Over what is “necessarily” quantified, here? Do you mean:
“… to imply that something other than telling the truth will necessarily lead to them making worse decisions, in every case where they make any decision”?
or:
“… to imply that something other than telling the truth will necessarily lead to them making worse decisions, in every case where I tell them something other than the truth”?
or:
“… to imply that it is necessarily the case that a policy other than telling them the truth will, in expectation, lead to them making worse decisions on average”?
or something else?
I ask because under the first two interpretations, for example, the claim is true even when dealing with perfectly rational agents. But the third claim seems highly questionable if applied to literally all people whom you have met.
I believe that S=∅, where S={humans which satisfy T} and T = “in every single situation where someone can either tell them the truth or can instead tell them something false or not fully optimized for truth, they make better decisions when given the truth.”
This is compatible with S′≠∅, where S′={humans which satisfy T′} and T′ = “in the vast majority of situations where someone can either tell them the truth or can instead tell them something false or not fully optimized for truth, they make better decisions when given the truth.” In fact, I believe S′ to be a very large set.
It is also compatible with S″≠∅, where S″={humans which satisfy T″} and T″ = “a general policy of always telling them the truth will, on average (or in expectation over single events), result in them making better decisions than a general policy of never telling them the truth.” Indeed, I believe S″ to be a very large set as well.
Right, so, that first one is also true of perfectly rational agents, so tells us nothing interesting. (Unless you quantify “they make better decisions when given the truth” as “in expectation” or “on average”, rather than “as events actually turn out”—but in that case I once again doubt the claim as it applies to people you’ve met.)
Yes, in expectation over how the events can turn out given the (very rough and approximate) probability distributions they have in their minds at the time of/right after receiving the information. For every single person I know, I believe there are some situations where, were I to give them the truth, I would predict (at the time of giving them the information, not post hoc) that they would perform worse than if I had told them something other than the truth.
This is why I said “make better decisions” instead of merely “obtain better outcomes,” since the latter would lend itself more naturally to the interpretation of “as things actually turn out,” while the decision is evaluated on the basis of what was known at the time and not through what happened to occur.
To all that stuff about “consequentialism” and “writing down your bottom line”, I will say only that consequentialism is meaningless without values and preferences. I value honesty. I prefer that others with whom I interact do likewise.
Now, how could you possibly know that?
Given that you didn’t actually make clear that any of those things were about the “necessary” condition (indeed you didn’t even mention the word in any of points #2, #3, or #4 in the grandparent—I did check), I think it’s hardly reasonable to call what I said “incorrect as a factual matter”. But never mind that; you’ve clarified your meaning now, I suppose…
Taking your clarification into account, I will say that I find all three of those points to be some combination of incomprehensible, baffling, and profoundly misguided. (I will elaborate if you like, though the details of these disagreements seem to me to be mostly a tangent.)
Yep. That certainly is the problem. And therefore, the conclusion is that this concept is hopelessly confused and not possible to usefully apply in practice—right?
… I guess not.
Look, you certainly can say “I declare that what I mean by ‘necessary’ is ‘helpful’ and/or ‘useful’”. (Of course then you might find yourself having to field the question of whether “helpful” and “useful” are actually the same thing—but never mind that.) The question is, does everyone involved understand this term “necessary” in the same way?
Empirically: no.
Ah, sorry, that’s indeed a restricted forum. Here’s the post, with appropriate redactions:
https://wiki.obormot.net/Temp/QuotedDSLForumPost
Precisely the problem (well, one of the several problems, anyhow) is that there is not, in fact, a “communal understanding” of these concepts, but rather a whole bunch of different understandings—and given the words and concepts we’re talking about here, there’s no hope of it ever being otherwise.
This discussion started out with a question about what is ideal, not what is allowed by the rules, or any such thing, so legibility of rules and consistency of moderation—important as those things might be—are not the point here.
As I mentioned, this fits in as a “particularly powerful commitment to truthtelling,” even though in your case it does not seem to be entirely deontological. Certainly a valid preference (insofar as preferences can even be described as “valid”), but orthogonal to the subset of empirical claims about how community interactions would likely play out with or without the norms under discussion.
This is one of the rare occasions in which arguing by definition actually works (minus the general issues with modal logics). The lurkers are the members of the community who actively view (and even vote on) the content but do not make their presence known through comments or posts. As per statistics I have seen on the site a few times (admittedly, most of them refer back to the period around ~2017 or so, but the phenomenon being discussed here has only been further accentuated by the more recent influx of users), LW is just like other Internet communities in that the vast majority of the userbase is made up of lurkers, most of whom have newer accounts and have engaged less with the particularities of the subculture of longer-tenured members. Instead, they come from different places on the Internet, with different background assumptions that are much more in line with the emphasis on kindness and usefulness (LessWrong being more of an outlier among such communities in how much it cares about truth per se).
You seem to have misunderstood the point about one literal meaning of “necessary” being that it refers only to cases of genuine (mostly medical) emergency. It is not that this word is confused, but rather that, as with many parts of imperfect human languages that are built up on the fly instead of being designed purposefully from the beginning, it has different meanings in different contexts. It is overloaded, so the particular meaning at play depends on context clues (which are, by their nature, mostly implicit rather than explicitly pointed out), such as what the tone of the communication has been thus far, what forum it is taking place in, whether the setting is formal or informal, whether it deals with topics in a mathematical or engineering or philosophical way, etc. (this goes beyond mere Internet discussions and in fact applies to all aspects of human-to-human communication). This doesn’t mean the word is “bad,” as it may be (and often is) still the best option available in our speech to express the ideas we are getting across.
Whether it is “necessary” for someone to point out that some other person is making a particular error (say, for example, claiming that a particular piece of legislation was passed in 1883 instead of the correct answer of 1882) depends on that particular context: if it is a casual conversation between friends in which one is merely explaining to the other that there have been some concrete examples of government-sponsored anti-Chinese legislation, then butting in to correct that is not so “necessary”; by contrast, if this is a PhD student writing their dissertation on this topic, and you are their advisor, it becomes much more “necessary” to point out their error early on.
Certainly not! It would be quite amazing (and frankly rather suspicious) if this were to happen; consider just how many different users there are. The likelihood of every single one of them having the same internal conception of the term is so low as to be negligible. But, of course, it is not necessary to reach this point, nor is it productive to do so, as it requires wasting valuable time and resources to attempt to pin down concepts through short sentences in a manner that simply cannot work out. What matters is whether sufficiently many users understand the ideas well enough to behave according to them in practice; small disagreements of opinion may remain, but those can be ironed out on a case-by-case basis through the reasoning and discretion of community leaders (often moderators).
Yes, you are right to say this, as it seems you do not believe you have the same internal representation of these words as you think other people do (I fully believe that you are genuine in this). But, as mentioned above, the mere existence of a counterexample is not close to sufficient to bring the edifice down, and empirically, your confusions are more revealing about yourself and the way your mind works than they are a signal of underlying problems with the general community understanding of these matters; this would be an instance of improperly Generalizing From One Example.
Yeah, this works.
For the reasons mentioned above, I do not believe this to be correct at an empirical level, just as I agree with the assessments that many of the confusions you have previously professed are not about concepts that confuse the vast majority of other people (and, in particular, other users of LW).
“What is allowed by the rules”? What rules? There seems to be a different confusion going on in your paragraph here. What are being discussed are the rules and norms of the community; it’s not that these are allowed or prohibited by something else, but rather that they are the principles that allow or prohibit other behaviors by users in the future. The legibility of rules and consistency of moderation are absolutely part of the point, because they weigh (to a heavy but not overwhelming extent) on whether such rules and norms are desirable and, as a natural corollary, whether a particular system of such norms is ideal. This is because they impact the empirical outcomes of enacting them, such as whether users will understand them, adhere to them, promote them, etc. Ignoring the consequences of that is making the same type of error that has been discussed in this thread in depth.
You then followed this with several empirical claims (one of which was based not on any concrete evidence but just on speculation based on claimed general patterns). No, sorry, arguing by definition does not work here.
Well, words can’t really be confused, can they? But yes, the word “necessary” is overloaded, and some or possibly most or maybe even all of its common usages are also vague enough that people who think they’re talking about the same thing can come up with entirely different operationalizations for it, as I have empirically demonstrated, multiple times.
That is not the problem. The problem (or rather—again—one of multiple problems) is that even given a fixed context, people easily can, and do, disagree not only on what “necessary” means in that context, but also on whether any given utterance fits that criterion (on which they disagree), and even on the purpose of having the criterion in the first place. This, again, is given some context.
It’s pretty clear that they don’t. I mean, we’ve had how many years of SSC, and now ACX (and, to a lesser extent, also DSL and Less Wrong) to demonstrate this?
Sorry, no, this doesn’t work. I linked multiple examples of other people disagreeing on what “necessary” means in any given context, including Scott Alexander, who came up with this criterion in the first place, disagreeing with himself about it, in the course of the original post that laid out the concept of moderating according to this criterion!
Uh… I think you should maybe reread what I’m responding to, there.
Yes, people end up with different representations of the same kinds of ideas when you force them to spell out how they conceptualize a certain word when that word does not have a simple representation in natural language. It is the same phenomenon that prompted Critch to label “consciousness” as a conflationary alliance (and Paradiddle accurately identified the underlying issue that caused this seeming disparity in understanding).
In any case, as I have mentioned above, differences in opinion are not only inevitable, but also perfectly fine as long as they do not grow too large or too common. From my comment above: “small disagreements of opinion may remain, but those can be ironed out on a case-by-case basis through the reasoning and discretion of community leaders (often moderators).”
LessWrong and SSC (while it was running) have maintained arguably the highest aggregate quality of commentary on the Internet, at least among the relatively large communities that I know about. I do not see what is being demonstrated, other than precisely the opposite of the notion that norms sketched out in natural language cannot be followed en masse or cannot generate the type of culture that instills such norms in new users (or in users just transitioning from lurking to generating content of their own).
I think I’m going to tap out now. Unfortunately, I believe we have exhausted the vast majority of useful avenues of discourse on this matter.
But these aren’t small disagreements. They’re big disagreements.
The SSC case demonstrates precisely that these particular norms, at least, which were sketched out in natural language, cannot be followed en masse or, really, at all, and that (a) trying to enforce them leads to endless arguments (usually started by someone responding to someone else’s comment by demanding to know whether it was kind or necessary—note that the “true” criterion never led to such acrimony!), and (b) the actual result is a steady degradation of comment (and commenter) quality.
The Less Wrong case demonstrates that the whole criterion is unnecessary. (Discussions like this one also demonstrate some other things, but those are secondary.)