Knowingly saying things that are not true is bad, yes. (It might be good sometimes… when communicating with enemies. There’s a reason why the classic thought experiment asks whether you would lie to the Gestapo!)
The thought experiment is fine as a stand-alone dilemma for those who have some particularly powerful deontological commitment to truthtelling, I suppose. Since I do not number myself among them, deciding to lie in that case is straightforwardly the choice I would make. And, of course, lying to enemies is indeed often useful (perhaps even the default option to choose).
But I definitely do not agree with the (implied) notion that it is only when dealing with enemies that knowingly saying things that are not true is the correct option. There have, of course, been a great many words spilled on this matter here on LW that ultimately, in my view, veered very far away from the rather straightforward conclusion: when we analyze the expected consequences of our words to others (consequences factor into my decisions at least to some extent, and have in any case been part of the ethos of LessWrong for more than 16 years) and we realize that the optimal choice is not the one that maximizes our estimation of truth value (or even minimizes it), then that is the action we select. Since we are imperfect beings running on corrupted hardware, we can (and likely should) make allowances for the fact that we are likely worse at modeling the world than we would intuitively expect to be, perhaps by going 75% of the way to consequentialism instead of committing ourselves to it fully?
But that merely means adjusting the ultimate computation to take such considerations into account, and definitely does not even come close to meaning you write down your bottom line as “never knowingly say things that are not true to friends or strangers” before actually analyzing the situation in front of you on its own merits.
I don’t see any even somewhat plausible justification for doing such things in a discussion on Less Wrong. There is no need to reach for things like “dignity” here… are we really having the “is posting lies on Less Wrong bad, actually? what if it’s good?” discussion?
First as a more meta note, incredulity is an argumentative superweapon and thus makes for a powerful rhetorical tool. I do not think there is anything inherently wrong with using it, of course, but in this particular case I also do not think it is bad to point this out so readers are aware, and adjust their beliefs accordingly.
Now on to the object level, LessWrong is indeed founded on a set of principles that center around truthtelling to such an extent that posting lies is bad here. But the justification for that is not a context-free Fully General Counterargument in favor of maximal truth-seeking, but rather a basic observation that violating the rules of the site will predictably and correctly lead to negative outcomes for everyone involved (from yourself getting banned, to the other users you are responding to being rightfully upset about the violation of norms of expected behavior, to the moderators having to waste time getting rid of you, and to second-order consequences for the community in terms of weakening the effect of the norms on constraining other users, which would make future interactions between them more chaotic and overall worse off).
But zooming out for a second, the section of your comment that I quoted is mostly out of scope of my own comment, which explained the general reasons why the particular norms at issue are good for communities. You have focused in on a specific application of those norms which would be unreasonable (and is in any case merely the final paragraph of my comment, which I spent much less time on since it is by far the least applicable to the case of LessWrong). There is no contradiction here: as in many other areas (including law), the specific controls the general, and particular considerations often force general principles to give way. It is yet another illustration of why those principles are built to be flexible in the first place, which accords with my analysis above.
Seen by whom?
Seen by the users of the site, at a general level, and in particular the overwhelming majority of lurkers that form a crucial part of the base (and influence the content on the site through their votes) at any given time, in no small part because of their potential to become more productive generators of value in the future (of course, that potential only becomes actualized if those users are not driven out beforehand by an aggregate lack of adherence to norms akin to the ones discussed here, as per my explanation in parts 1 and 2 above).
There is a tremendous selection bias involved in only taking into account the opinions of those who comment; while these people may be more useful for the community (and many of them may have been here for a longer time and thus feel that it is their garden too), the incredulity you offer up at the possible existence of disagreement on this matter does not seem to reckon with the community’s aggregate opinions on these topics when they came up 1-1.5 years ago (I’d rather not relitigate those matters, so as to respect the general community consensus that they are in the past, but I should give at least one illustrative example that forms the prototype for what I was talking about).
I’ll take the rest of your comment slightly out of order, so as to better facilitate the explanation of what’s involved.
you say almost nothing about the “necessary” criterion
This is incorrect as a factual matter. There are 3 crucial moments where the “necessary” criterion came into play in my explanation of the benefits of these norms:
by discouraging other-optimizing (this is mentioned in point 2 above) and other related behaviors as being neither necessary nor useful, even in situations where the user might honestly (and correctly) believe that they are merely “telling it as it is.”
by focusing at least in small part on the impact that the statement has on the readers as opposed to the mere question of how closely it approximates the territory and constrains it (this is the primary focus of point 3 above).
virtually the entirety of point 4.
approximately one paragraph (and a rather baffling one, at that) on the “true” criterion
I would wager that LessWrong understands the concept of “truth” to a far greater extent than virtually any other place on the internet. I did not expand upon this because I had nothing necessary or useful to say that has not already been enshrined as part of the core of the site.
By contrast, what I explain in one of my footnotes is that “true” and “kind,” on their face, do not belong in the same category because they essentially have different conceptual types. Yet, as I wrote above, there is an important reason why the norms are typically stated in the form Davidmanheim chose rather than through the substitution of “honestly” for “true” (which, concededly, would make the conceptual types the same).
And when you do mention it, you actually shift, unremarked, between several different versions of it
There does not seem to be a natural boundary that carves reality at the joints in a manner that leaves the different formulations in different boxes, given the particular context at play here (namely, user interactions in an online community). From a literal perspective, it is virtually never “necessary” for a random internet person to tell you anything, unless for some reason you desperately need emergency help and they are the only ones who can supply you with the requisite information.
As such, “necessary” and “useful” play virtually the same role in this context. “Helpful” does as well, although it blends in part (but not the entirety) of the “kindness” rubric that we’ve already gone over.
These are two very different concepts!
As mentioned above, in the case of comment sections on LessWrong? They are not very different at all.
And what do any of these things mean, anyhow? What is “necessary”? What is “useful”? Who decides, and how? (Ditto for “kind”, although that one is at least more obviously vague and prone to subjectivity of interpretation, and enough has been said about that already.)
Quite to the contrary: the actual important problem at hand has indeed been written about before, about 9 years earlier than all of the links you just gave (except for possibly the last one, where “The topic or board you are looking for appears to be either missing or off limits to you”). The type of conceptual analysis your questions require will not output a short encoding of the relevant category in thingspace in the form of a nice, clean definition because the concepts themselves did not flow out of such a (necessary & sufficient conditions-type) definition to begin with. That would be precisely the same type of type error as the one I have already identified earlier in this comment. They are the result of a changing and flowing communal understanding of ideas that happens organically instead of in a guided or designed manner.
The people who decide are the moderators (not just in terms of having control of the ban-hammer, but also by setting soft rules or general expectations for what is and isn’t acceptable, commendable, or upvote-worthy through their communications of what LessWrong is about). And they decide how to set those rules, norms, and expectations based on their own experiences and judgment, while taking into account the continuous feedback that the community members give regarding the quality of content on the site (as per my earlier explanation above in this comment).
Well, let me put it this way: when anyone can give anything like a coherent, consistent, practically applicable definition of any of these things, then I’ll consider whether it might possibly be a good idea to take them as ideals.
Putting aside my previous two paragraphs, it is, of course, good for systems of norms, just like most incentive systems overall, to be clear and legible enough that users end up understanding them and become free to optimize within their framework. But Internet moderation cannot function with the same level of consistency and ease of application (and, consequently, lack of discretion) as something like, say, a legal system, let alone the degree of coherence that you seem to demand in order to even consider whether these norms could possibly be a good idea.
As Habryka explained less than 3 months ago:
It is valuable and a precious public good to make it easy to know which actions you take will cause you to end up being removed from a space. However, that legibility also comes at great cost, especially in social contexts. Every clear and bright-line rule you outline will have people butting right up against it, and de facto, in my experience, moderation of social spaces like LessWrong is not the kind of thing you can do while being legible in the way that, for example, modern courts aim to be legible.
As such, we don’t have laws. If anything we have something like case-law which gets established as individual moderation disputes arise, which we then use as guidelines for future decisions, but also a huge fraction of our moderation decisions are downstream of complicated models we formed about what kind of conversations and interactions work on LessWrong, and what role we want LessWrong to play in the broader world, and those shift and change as new evidence comes in and the world changes.
I do ultimately still try pretty hard to give people guidelines and to draw lines that help people feel secure in their relationship to LessWrong, and I care a lot about this, but at the end of the day I will still make many from-the-outside-arbitrary-seeming decisions in order to keep LessWrong the precious walled garden that it is.
I definitely do not agree with the (implied) notion that it is only when dealing with enemies that knowingly saying things that are not true is the correct option
There’s a philosophically deep rationale for this, though: to a rational agent, the value of information is nonnegative. (Knowing more shouldn’t make your decisions worse.) It follows that if you’re trying to misinform someone, it must either be the case that you want them to make worse decisions (i.e., they’re your enemy), or you think they aren’t rational.
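A minimal sketch of the standard argument behind that nonnegativity claim, with notation chosen here purely for illustration (a hidden state θ, a free and honest signal s, actions a, and a utility function U):

$$
V_{\text{no info}} \;=\; \max_{a}\, \mathbb{E}_{\theta}\big[U(a,\theta)\big]
\;\le\;
\mathbb{E}_{s}\!\Big[\max_{a}\, \mathbb{E}_{\theta \mid s}\big[U(a,\theta)\big]\Big]
\;=\; V_{\text{info}}
$$

The inequality holds because, for every realized s, the inner maximum is at least the expected utility of the fixed no-information action, and averaging that quantity over s recovers the no-information value by the law of total expectation. The argument assumes the receiver conditions on the signal correctly and maximizes expected utility, which is exactly the assumption contested in the replies below.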
To clarify, I straightforwardly do not believe any human being I have ever come into contact with is rational enough for information-theoretic considerations like that to imply that something other than telling the truth will necessarily lead to them making worse decisions.
The philosophical ideal can still exert normative force even if no humans are spherical Bayesian reasoners on a frictionless plane. The disjunction (“it must either be the case that”) is significant: it suggests that if you’re considering lying to someone, you may want to clarify to yourself whether and to what extent that’s because they’re an enemy or because you don’t respect them as an epistemic peer. Even if you end up choosing to lie, it’s with a different rationale and mindset than someone who’s never heard of the normative ideal and just thinks that white lies can be good sometimes.
Yes, this seems correct. With the added clarification that “respecting [someone] as an epistemic peer” is situational rather than a characteristic of the individual in question. It is not that there are people more epistemically advanced than me, whom I believe I should only ever tell the full truth to, and people less epistemically advanced than me, whom I should lie to with absolute impunity whenever I start feeling like it. It depends on a particularized assessment of the moment at hand.
I would suspect that most regular people who tell white lies (for pro-social reasons, at least in their minds) generally do so in cases where they (mostly implicitly and subconsciously) determine that the other person would not react well to the truth, even if they don’t spell out the question in the terms you chose.
Is it the case that if there are two identically-irrational / -boundedly-rational agents, then sharing information between them must have positive value?
Over what is “necessarily” quantified, here? Do you mean:
“… to imply that something other than telling the truth will necessarily lead to them making worse decisions, in every case where they make any decision”?
or:
“… to imply that something other than telling the truth will necessarily lead to them making worse decisions, in every case where I tell them something other than the truth”?
or:
“… to imply that it is necessarily the case that a policy other than telling them the truth will, in expectation, lead to them making worse decisions on average”?
or something else?
I ask because under the first two interpretations, for example, the claim is true even when dealing with perfectly rational agents. But the third claim seems highly questionable if applied to literally all people whom you have met.
I believe that S=∅, where S={humans which satisfy T} and T = “in every single situation where someone can either tell them the truth or can instead tell them something false or not fully optimized for truth, they make better decisions when given the truth.”
This is compatible with S′≠∅, where S′={humans which satisfy T'} and T’ = “in the vast majority of situations where someone can either tell them the truth or can instead tell them something false or not fully optimized for truth, they make better decisions when given the truth.” In fact, I believe S’ to be a very large set.
It is also compatible with S″≠∅, where S″={humans which satisfy T″} and T″ = “a general policy of always telling them the truth will, on average (or in expectation over single events), result in them making better decisions than a general policy of never telling them the truth.” Indeed, I believe S″ to be a very large set as well.
Right, so, that first one is also true of perfectly rational agents, so tells us nothing interesting. (Unless you quantify “they make better decisions when given the truth” as “in expectation” or “on average”, rather than “as events actually turn out”—but in that case I once again doubt the claim as it applies to people you’ve met.)
Yes, in expectation over how the events can turn out given the (very rough and approximate) probability distributions they have in their minds at the time of/right after receiving the information. For every single person I know, I believe there are some situations where, were I to give them the truth, I would predict (at the time of giving them the information, not post hoc) that they will perform worse than if I had told them something other than the truth.
This is why I said “make better decisions” instead of merely “obtain better outcomes,” since the latter would lend itself more naturally to the interpretation of “as things actually turn out,” while the decision is evaluated on the basis of what was known at the time and not through what happened to occur.
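A toy illustration of the kind of claim being made here; the scenario, the numbers, and the agent models below are hypothetical, invented solely to make the point concrete rather than to describe anyone in particular. An agent whose model of the source is wrong enough (here, one who treats an honest reporter as a habitual liar) can be expected, at decision time, to do worse after being told the truth than by acting on their prior alone:

```python
import random

random.seed(0)

N = 100_000            # number of simulated decision problems
P_STATE = 0.5          # prior probability that the true state is 1
REPORT_ACCURACY = 0.8  # the honest reporter's signal matches the state 80% of the time


def decide_from_prior() -> int:
    # With a 50/50 prior and symmetric payoffs, either action is equally good; pick 1.
    return 1


def decide_bayesian(signal: int) -> int:
    # Correct model of the source: the signal is informative, so follow it.
    return signal


def decide_distrustful(signal: int) -> int:
    # Mis-specified model: treats the (actually honest) source as a liar, so does the opposite.
    return 1 - signal


def accuracy(uses_signal: bool, decide) -> float:
    wins = 0
    for _ in range(N):
        state = 1 if random.random() < P_STATE else 0
        signal = state if random.random() < REPORT_ACCURACY else 1 - state
        action = decide(signal) if uses_signal else decide()
        wins += int(action == state)
    return wins / N


print("kept in the dark, prior only:  ", accuracy(False, decide_from_prior))   # ~0.50
print("told the truth, correct update:", accuracy(True, decide_bayesian))      # ~0.80
print("told the truth, distrustful:   ", accuracy(True, decide_distrustful))   # ~0.20
```

The nonnegativity result quoted earlier in the thread conditions on the receiver updating correctly; once that assumption fails, whether handing over the truth improves their decisions becomes an empirical, case-by-case question, which is the kind of judgment being described in the comment above.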
To all that stuff about “consequentialism” and “writing down your bottom line”, I will say only that consequentialism is meaningless without values and preferences. I value honesty. I prefer that others with whom I interact do likewise.
Seen by whom?
Seen by the users of the site, at a general level, and in particular the overwhelming majority of lurkers
Now, how could you possibly know that?
you say almost nothing about the “necessary” criterion
This is incorrect as a factual matter. There are 3 crucial moments where the “necessary” criterion came into play in my explanation of the benefits of these norms:
Given that you didn’t actually make clear that any of those things were about the “necessary” condition (indeed you didn’t even mention the word in any of points #2, #3, or #4 in the grandparent—I did check), I think it’s hardly reasonable to call what I said “incorrect as a factual matter”. But never mind that; you’ve clarified your meaning now, I suppose…
Taking your clarification into account, I will say that I find all three of those points to be some combination of incomprehensible, baffling, and profoundly misguided. (I will elaborate if you like, though the details of these disagreements seem to me to be mostly a tangent.)
And when you do mention it, you actually shift, unremarked, between several different versions of it
There does not seem to be a natural boundary that carves reality at the joints in a manner that leaves the different formulations in different boxes, *given the particular context at play here (namely, user interactions in an online community).* From a literal perspective, it is virtually never “necessary” for a random internet person to tell you anything, unless for some reason you desperately need emergency help and they are the only ones who can supply you with the requisite information.
Yep. That certainly is the problem. And therefore, the conclusion is that this concept is hopelessly confused and not possible to usefully apply in practice—right?
As such, “necessary” and “useful” play virtually the same role in this context. “Helpful” does as well, although it blends in part (but not the entirety) of the “kindness” rubric that we’ve already gone over.
… I guess not.
Look, you certainly can say “I declare that what I mean by ‘necessary’ is ‘helpful’ and/or ‘useful’”. (Of course then you might find yourself having to field the question of whether “helpful” and “useful” are actually the same thing—but never mind that.) The question is, does everyone involved understand this term “necessary” in the same way?
Empirically: no.
except for possibly the last one, where “The topic or board you are looking for appears to be either missing or off limits to you”
Ah, sorry, that’s indeed a restricted forum. Here’s the post, with appropriate redactions:
https://wiki.obormot.net/Temp/QuotedDSLForumPost
They are the result of a changing and flowing communal understanding of ideas that happens organically instead of in a guided or designed manner.
Precisely the problem (well, one of the several problems, anyhow) is that there is not, in fact, a “communal understanding” of these concepts, but rather a whole bunch of different understandings—and given the words and concepts we’re talking about here, there’s no hope of it ever being otherwise.
Internet moderation
This discussion started out with a question about what is ideal, not what is allowed by the rules, or any such thing, so legibility of rules and consistency of moderation—important as those things might be—are not the point here.
To all that stuff about “consequentialism” and “writing down your bottom line”, I will say only that consequentialism is meaningless without values and preferences. I value honesty. I prefer that others with whom I interact do likewise.
As I mentioned, this fits in as a “particularly powerful commitment to truthtelling,” even as in your case it does not seem to be entirely deontological. Certainly a valid preference (insofar as preferences can even be described as “valid”), but orthogonal to the subset of empirical claims relating to how community interactions would likely play out under/without the norms under discussion.
Now, how could you possibly know that?
This is one of the rare occasions in which arguing by definition actually works (minus the general issues with modal logics). The lurkers are the members of the community that are active in terms of viewing (and even engaging with in terms of upvoting/downvoting) the content but who do not make their presence known through comments or posts. As per statistics I have seen on the site a few times (admittedly, most of them refer back to the period around ~2017 or so, but the phenomenon being discussed here has only been further accentuated with the more recent influx of users), LW is just like other Internet communities in terms of the vast majority of the userbase being made up of lurkers, most of whom have newer accounts and have engaged with the particularities of the subculture of longer-lasting members to a lesser extent. Instead, they come from different places on the Internet, with different background assumptions that are much more in line with the emphasis on kindness and usefulness (as LessWrong is more of an outlier among such communities in how much it cares about truth per se).
Yep. That certainly is the problem. And therefore, the conclusion is that this concept is hopelessly confused and not possible to usefully apply in practice—right?
You seem to have misread the point: yes, one literal meaning of “necessary” is that it refers only to cases of genuine (mostly medical) emergencies. But it is not that this word is confused; rather, as with many parts of imperfect human languages that are built up on the fly instead of being designed purposefully from the beginning, it has different meanings in different contexts. It is overloaded, so the particular meaning at play depends on context clues (which are, by their nature, mostly implicit rather than explicitly pointed out) such as what the tone of the communication has been thus far, what forum it is taking place in, whether it is a formal or informal setting, whether it deals with topics in a mathematical or engineering or philosophical way, etc. (this goes beyond mere Internet discussions and in fact applies to all aspects of human-to-human communication). This doesn’t mean the word is “bad”, as it may be (and often is) still the best option available in our speech to express the ideas we are getting across.
Whether it is “necessary” for someone to point out that some other person is making a particular error (say, for example, claiming that a particular piece of legislation was passed in 1883 instead of the correct answer of 1882) depends on that particular context: if it is a casual conversation between friends in which one is merely explaining to the other that there have been some concrete examples of government-sponsored anti-Chinese legislation, then butting in to correct that is not so “necessary”; by contrast, if this is a PhD student writing their dissertation on this topic, and you are their advisor, it becomes much more “necessary” to point out their error early on.
The question is, does everyone involved understand this term “necessary” in the same way?
Certainly not! It would be quite amazing (and frankly rather suspicious) if this were to happen; consider just how many different users there are. The likelihood of every single one of them having the same internal conception of the term is so low as to be negligible. But, of course, it is not necessary to reach this point, nor is it productive to do so, as it requires wasting valuable time and resources to attempt to pin down concepts through short sentences in a manner that simply cannot work out. What matters is whether sufficiently many users understand the ideas well enough to behave according to them in practice; small disagreements of opinion may remain, but those can be ironed out on a case-by-case basis through the reasoning and discretion of community leaders (often moderators).
Empirically: no.
Yes, you are right to say this, as it seems you do not believe you have the same internal representation of these words as you think other people do (I fully believe that you are genuine in this). But, as mentioned above, the mere existence of a counterexample is not close to sufficient to bring the edifice down, and empirically, your confusions are more revealing about yourself and the way your mind works than they are a signal of underlying problems with the general community understanding of these matters; this would be an instance of improperly Generalizing From One Example.
Ah, sorry, that’s indeed a restricted forum. Here’s the post, with appropriate redactions:
Yeah, this works.
Precisely the problem (well, one of the several problems, anyhow) is that there is not, in fact, a “communal understanding” of these concepts, but rather a whole bunch of different understandings—and given the words and concepts we’re talking about here, there’s no hope of it ever being otherwise.
For the reasons mentioned above, I do not believe this to be correct at an empirical level, just as I agree with the assessments that many of the confusions you have previously professed are not about concepts confusing to the vast majority of other people (and, in particular, other users of LW).
This discussion started out with a question about what is ideal, not what is allowed by the rules, or any such thing, so legibility of rules and consistency of moderation—important as those things might be—are not the point here.
“What is allowed by the rules”? What rules? There seems to be a different confusion going on in your paragraph here. What are being discussed are the rules and norms in the community; it’s not that these are allowed or prohibited by something else, but rather they are the principles that allow or prohibit other behaviors by users in the future. The legibility of rules and consistency of moderation are absolutely part of the point because they weigh (to a heavy but not overwhelming extent) on whether such rules and norms are desirable, and as a natural corollary, whether a particular system of such norms is ideal. This is because they impact the empirical outcomes of enacting them, such as whether users will understand them, adhere to them, promote them, etc. Ignoring the consequences of that is making the same type of error that has been discussed in this thread in depth.
This is one of the rare occasions in which arguing by definition actually works
You then followed this with several empirical claims (one of which was based not on any concrete evidence but just on speculation based on claimed general patterns). No, sorry, arguing by definition does not work here.
It is not that this word is confused … It is overloaded
Well, words can’t really be confused, can they? But yes, the word “necessary” is overloaded, and some or possibly most or maybe even all of its common usages are also vague enough that people who think they’re talking about the same thing can come up with entirely different operationalizations for it, as I have empirically demonstrated, multiple times.
Whether it is “necessary” for someone to point out that some other person is making a particular error … depends on that particular context
That is not the problem. The problem (or rather—again—one of multiple problems) is that even given a fixed context, people easily can, and do, disagree not only on what “necessary” means in that context, but also on whether any given utterance fits that criterion (on which they disagree), and even on the purpose of having the criterion in the first place. This, again, is given some context.
What matters is whether sufficiently many users understand the ideas well enough to behave according to them in practice
It’s pretty clear that they don’t. I mean, we’ve had how many years of SSC, and now ACX (and, to a lesser extent, also DSL and Less Wrong) to demonstrate this?
But, as mentioned above, the mere existence of a counterexample is not close to sufficient to bring the edifice down, and empirically, your confusions are more revealing about yourself and the way your mind works than they are a signal of underlying problems with the general community understanding of these matters; this would be an instance of improperly Generalizing From One Example.
Sorry, no, this doesn’t work. I linked multiple examples of other people disagreeing on what “necessary” means in any given context, including Scott Alexander, who came up with this criterion in the first place, disagreeing with himself about it, in the course of the original post that laid out the concept of moderating according to this criterion!
This discussion started out with a question about what is ideal, not what is allowed by the rules, or any such thing, so legibility of rules and consistency of moderation—important as those things might be—are not the point here.
“What is allowed by the rules”? What rules? There seems to be a different confusion going on in your paragraph here.
Uh… I think you should maybe reread what I’m responding to, there.
people who think they’re talking about the same thing can come up with entirely different operationalizations for it, as I have empirically demonstrated, multiple times
Yes, people end up with different representations of the same kinds of ideas when you force them to spell out how they conceptualize a word that does not have a simple representation in natural language. It is the same phenomenon that prompted Critch to label “consciousness” as a conflationary alliance (and Paradiddle accurately identified the underlying issue that caused this seeming disparity in understanding).
In any case, as I have mentioned above, differences in opinion are not only inevitable, but also perfectly fine as long as they do not grow too large or too common. From my comment above: “small disagreements of opinion may remain, but those can be ironed out on a case-by-case basis through the reasoning and discretion of community leaders (often moderators).”
It’s pretty clear that they don’t. I mean, we’ve had how many years of SSC, and now ACX (and, to a lesser extent, also DSL and Less Wrong) to demonstrate this?
LessWrong and SSC (while it was running) have maintained arguably the highest aggregate quality of commentary on the Internet, at least among relatively large communities that I know about. I do not see what is being demonstrated, other than precisely the opposite of the notion that norms sketched out in natural language cannot be followed en masse or cannot generate the type of culture that promotes such norms to new users (or users just transitioning from being lurkers to generating content of their own).
Uh… I think you should maybe reread what I’m responding to, there.
I think I’m going to tap out now. Unfortunately, I believe we have exhausted the vast majority of useful avenues of discourse on this matter.
From my comment above: “small disagreements of opinion may remain, but those can be ironed out on a case-by-case basis through the reasoning and discretion of community leaders (often moderators).”
But these aren’t small disagreements. They’re big disagreements.
I do not see what is being demonstrated, other than precisely the opposite of the notion that norms sketched out in natural language cannot be followed *en masse* or cannot generate the type of culture that promotes such norms to new users (or users just transitioning from being lurkers to generating content of their own).
The SSC case demonstrates precisely that these particular norms, at least, which were sketched out in natural language, cannot be followed en masse or, really, at all, and that (a) trying to enforce them leads to endless arguments (usually started by someone responding to someone else’s comment by demanding to know whether it was kind or necessary—note that the “true” criterion never led to such acrimony!), and (b) the actual result is a steady degradation of comment (and commenter) quality.
The Less Wrong case demonstrates that the whole criterion is unnecessary. (Discussions like this one also demonstrate some other things, but those are secondary.)