First: thank you for writing this. I, for one, appreciate seeing you lay out your reasoning like this.
Second:
We plan to implement the moderation log Said Achmiz recommended, so that if someone is deleting a lot of comments without trace you can at least go and check, and notice patterns.
I applaud this decision, obviously, and look forward to this feature! Transparency is paramount, and I’m very gratified to see that the LW2 team takes it seriously.
Third, some comments on the rest of it, in a somewhat haphazard manner and in no particular order:
On Facebook
I have, on occasion, over the past few years, read Facebook threads wherein “rationalists” discussed rationalist-relevant topics. These threads have been of the widest variety, but one thing they had in common is that the level of intellectual rigor, impressiveness of thinking, good sense, and quality of ideas was… low. To say that I was unimpressed would be an understatement. A few times, reading Facebook posts or comments has actively lowered my level of respect for this or that prominent figure in the community (obviously I won’t name names).
It would, in my view, be catastrophic, if the quality of discussion on Less Wrong matched that of “rationalist Facebook”.
Clearly, your (Raemon’s) opinion is different. I don’t know the explanation for the difference. Several possibilities present themselves—some reflect poorly on me, some on you, some on both or neither of us. I won’t speculate, here, on what the answer is. I only say this to raise the point that your evaluation of the discussions that take place on Facebook is, at least, not obviously correct.
If, however, my evaluation has any grain of truth, then the fact that Facebook discussions are “more enjoyable” is not something that speaks well of said discussions, nor of their participants; and it would be quite misguided indeed, to strive to replicate that quality here.
On old Less Wrong
Eliezer … has always employed a Reign-of-Terror-esque moderation style. You may disagree with this approach, but it’s not new.
I have seen this claim made before, and found it just as puzzling. I invite you to look through Less Wrong (actually, of course, Overcoming Bias) posts ca. 2007-2009. That was the golden age, in my view, when the Sequences were written. Read the comment sections of those posts. Harshly critical comments—much more harshly critical than are now allowed here—stand un-deleted, their authors un-banned. Eliezer even responds to many of them, as do many other commenters; and discussion takes place.
On “fuzzy system 1 stuff”
A recent example is what I’d call “fuzzy system 1 stuff.” The Kensho and Circling threads felt like they were mostly arguing about “is it even okay to talk about fuzzy system 1 intuitions in rational discourse?”.
I can speak for no one but myself here, but if anyone interpreted my comments that way, they were seriously mistaken. In my view, it is absolutely okay to talk about things like that in rational discourse. It must also be okay to explicitly and seriously question whether certain ideas are bad, mistaken, foolish, etc.; and to clearly and firmly reject bad ideas. After all, if it’s okay to talk about something, but not to say negative things about that thing, then the result will be as predictable as it is unfortunate…
On the goal conflict
I simply don’t think that public discussion and knowledge-building are at odds. To the contrary, I think the one is crucial for the other.
Elsewhere, Oliver Habryka commented that Less Wrong should be a safe space for posting bad (i.e., underdeveloped) ideas. If indeed bad means “underdeveloped” and not just plain “bad”, then I agree; but I think that criticism, and open (but civil, of course!) public discussion, is not only not antithetical to this, but in fact necessary, to make it useful.
It is good if underdeveloped ideas can be raised. It is good if they can be criticized. It is good if that criticism is not punished. It is good if the author of the underdeveloped idea responds either with a spirited defense or with “yeah, you’re right, that was a bad idea for the reasons you say—thanks, this was useful!”. This is what we should incentivize. This is how intellectual progress will take place.
Or, to put it another way: criticism of a bad idea does not constitute punishment for putting that idea forth—unless, of course, being criticized inherently causes one to lose face. But why should that be so? There’s only one real reason why, and it reflects quite poorly on a social environment if that reason obtains… Here, on Less Wrong, being criticized should be ok. Responding to criticism should be ok. Argument should be ok.
Otherwise you will get an echo chamber—and if instead of one echo chamber you have multiple ones, each with their own idiosyncratic echoes… well, I simply don’t see how that’s an improvement. Either way the site will have failed in its goal.
I strongly agree about the circling/kensho discussions. Nothing in them looked to me as if anyone was saying it’s not OK to talk about fuzzy system-1 intuitions in rational discourse. My impression of the most-negative comments was that they could be caricatured not as “auggh, get this fuzzy stuff out of my rational discourse” but as “yikes, cultists and mind-manipulators incoming, run away”. Less-caricaturedly: some of that discussion makes me uneasy because it seems as if there is a smallish but influential group of people around here who have adopted a particular set of rather peculiar practices and thought-patterns and want to spread the word about how great they are, but are curiously reluctant to be specific about them to those who haven’t experienced them already—and all that stuff pattern-matches to things I would rather keep a long way away from.
For the avoidance of doubt, the above is not an accusation, pattern-matching is not identity, etc., etc., etc. I mention it mostly because I suspect that uneasiness like mine is a likely source for a lot of the negative reactions, and because it’s a very different thing from thinking that the topic in question should somehow be off-limits in rational discourse.
FYI, I’ve definitely updated toward the “fuzzy-system-1 intuitions” not being the concern for most (or at least many) of the critics in Kensho and Circling.
(I do think there’s a related thing, though, which is that every time a post that touches upon fuzzy-system-1 stuff spawns a huge thread of intense argumentation, the sort of person who’d like to write that sort of post ends up experiencing a chilling effect that isn’t quite what the critics intended. This is similar, although not quite analogous, to the way that simply having the Reign of Terror option can produce a chilling effect on critics.)
I, for one, am not anti-criticism.
I also suspect Ray isn’t either, and isn’t saying that in his post, but it’s a long post, so I might have missed something.
The thing I find annoying to deal with is when discussion is subtly more about politics than the actual thing, which Ray does mention.
I feel like people get upvoted because
they voiced any dissenting opinion at all
they include evidence for their point, regardless of how relevant the point is to the conversation
they include technical language or references to technical topics
they cheer for the correct tribes and boo the other tribes
etc.
I appreciated the criticisms raised in my Circling post, and I upvoted a number of the comments that raised objections.
But the subsequent “arguments” often spiraled into people talking past each other and wielding arguments as weapons, etc. And not looking for cruxes, which I find to be an alarmingly common thing here, to the degree that I suspect people do not in fact WANT their cruxes to be on the table, and I’ve read multiple comments that support this.
Explicitly chiming in to clarify that yes, this is exactly my concern.
I only dedicated a couple paragraphs to this (search for “Incentivizing Good Ideas and Good Criticism”) because there were a lot of different things to talk about, but a central crux of mine is that, while much of the criticism you’ll find on LW is good, a sizeable chunk of it is just a waste of time and/or actively harmful.
I want better criticism, and I think the central disagreement is something like Said/Cousin_it and a couple others disagreeing strongly with me/Oli/Ben about what makes useful criticism.
(To clarify, I also think that many criticisms in the Circling thread were quite good. For example, it’s very important to determine whether Circling is training introspection/empathy (extrospection?), or ‘just’ inducing hypnosis. This is important both within and without the paradigm that Unreal was describing of using Circling as a tool for epistemic rationality. But, a fair chunk of the comments just seemed to me to express raw bewilderment or hostility in a way that took up a lot of conversational space without moving anything forward.)
But the subsequent “arguments” often spiraled into people … not looking for cruxes, which I find to be an alarmingly common thing here, to the degree that I suspect people do not in fact WANT their cruxes to be on the table, and I’ve read multiple comments that support this.
Let me confirm your suspicions, then: I simply don’t think the concept of the “crux” (as CFAR & co. use it) is nearly as universally applicable to disagreements as you (and others here) seem to imply. There was a good deal of discussion of this in some threads about “Double Crux” a while back (I haven’t the time right now, but later I can dig up the links, if requested). Suffice it to say that there is a deep disagreement here about the nature of disputes, how to resolve them, their causes, etc.
I simply don’t think the concept of the “crux” (as CFAR & co. use it) is nearly as universally applicable to disagreements as you (and others here) seem to imply.
This is surprising to me. A crux is a thing that if you didn’t believe it you’d change your mind on some other point—that seems like a very natural concept!
Is your contention that you usually can’t find any one statement such that if you changed your mind about it, you’d change your mind about the top-level issue? (Interestingly, this is the thrust of the top comment by Robin Hanson under Eliezer’s Is That Your True Rejection? post.)
I do not know how to operationalize this into a bet, but I would if I could.
My bet would be something like…
If a person can Belief Report / do Focusing on their beliefs (this might already eliminate a bunch of people)
Then I bet some lower-level belief-node (a crux) could be found that would alter the upper-level belief-nodes if the value/sign/position/weight of that cruxy node were to be changed.
Note: Belief nodes do not have to be binary (0 or 1). They can be fuzzy (0-1). Belief nodes can also be conjunctive.
If a person doesn’t work this way, I’d love to know.
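(As a purely illustrative sketch of the belief-node picture described above, not CFAR’s actual formalism: assume each belief is a node with a fuzzy credence in [0, 1], and a parent belief is a weighted or conjunctive combination of its children, so that changing a cruxy lower-level node moves the upper-level belief. The node names, weights, and combination rule below are made-up assumptions.)

```python
# Illustrative sketch only: beliefs as nodes with fuzzy credences in [0, 1],
# where a parent belief is a weighted or conjunctive combination of its
# children. All names and numbers here are made up.

class BeliefNode:
    def __init__(self, name, credence=None, children=None, conjunctive=False):
        self.name = name
        self.credence = credence        # used only by leaf (lowest-level) nodes
        self.children = children or []  # list of (child_node, weight) pairs
        self.conjunctive = conjunctive  # True => combine children like an AND

    def value(self):
        if not self.children:
            return self.credence
        vals = [(child.value(), weight) for child, weight in self.children]
        if self.conjunctive:
            result = 1.0
            for v, _ in vals:
                result *= v             # conjunction: product of child credences
            return result
        total = sum(w for _, w in vals)
        return sum(v * w for v, w in vals) / total  # weighted average otherwise

# A top-level belief resting on two lower-level nodes, one of which is the crux.
crux = BeliefNode("cruxy lower-level belief", credence=0.9)
other = BeliefNode("other supporting consideration", credence=0.6)
top = BeliefNode("top-level belief", children=[(crux, 0.8), (other, 0.2)])

print(round(top.value(), 2))  # 0.84
crux.credence = 0.1           # change the value of the cruxy node...
print(round(top.value(), 2))  # 0.2 -- the upper-level belief moves with it
```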
There are a lot of rather specific assumptions going into your model, here, and they’re ones that I find to be anywhere from “dubious” to “incomprehensible” to “not really wrong, but thinking of things that way is unhelpful”. (I don’t, to be clear, have any intention of arguing about this here—just pointing it out.) So when you say “If a person doesn’t work this way, I’d love to know.”, I don’t quite know what to say; in my view of things, that question can’t even be asked because many layers of its prerequisites are absent. Does that mean that I “don’t work this way”?
Aw Geez, well if you happen to explain your views somewhere I’d be happy to read them. I can’t find any comments of yours on Sabien’s Double Crux post or on the post called Contra Double Crux.
The moderators moved my comments, originally made on the former post… to… this post.
I think I’m expecting people to understand what “finding cruxes” looks like, but this is probably unreasonable of me. This is an hour-long class at CFAR, before “Double Crux” is actually taught. And even then, I suspect most people do not actually get the finer, deeper points of Finding Cruxes.
My psychology is interacting with this in some unhelpful way.
I’m living happily without that frustration, because for me agreement isn’t a goal. A comment that disagrees with me is valuable if it contains interesting ideas, no matter the private reasons; if it has no interesting ideas, I simply don’t reply. In my own posts and comments I also optimize for value of information (e.g. bringing up ideas that haven’t been mentioned yet), not for changing anyone’s mind. The game is about win-win trade of interesting ideas, not zero-sum tug of war.
I’m surprised to see finding cruxes contrasted with value of information considerations.
To me, much of the value of looking for cruxes is that it can guide the conversation to the most update-rich areas.
I try to optimize my posts and comments for value of information (e.g. bringing up new ideas).
Correct me if I’m wrong, but I would guess that part of your sense of taste about what makes something an interesting new idea is whether it’s relevant to anything else (in addition to maybe how beautiful or whatever it is on its own). And whether it would make anybody change their mind about anything seems like a pretty big part of relevance. So a significant part of what makes an idea interesting is whether it’s related to your or anybody else’s cruxes, no?
The game to me is about win-win trade of interesting ideas.
Setting aside whether debates between people who disagree are themselves win-win, cruxes are interesting (to me) not just in the context of a debate between opposing sides located in two different people, but also when I’m just thinking about my own take on an issue.
Given these considerations, it seems like the best argument for not being explicit about cruxes, is if they’re already implicit in your sense of taste about what’s interesting, which is correctly guiding you to ask the right questions and look for the right new pieces of information.
That seems plausible, but I’m skeptical that it’s not often helpful to explicitly check what would make you change your mind about something.
I think caring about agreement first vs VoI first leads to different behavior. Here are two test cases:
1) Someone strongly disagrees with you but doesn’t say anything interesting. Do you ask for their reasons (agreement first) or ignore them and talk to someone else who’s saying interesting but less disagreeable things (VoI first)?
2) You’re one of many people disagreeing with a post. Do you spell out your reasons that are similar to everyone else’s (agreement first) or try to say something new (VoI first)?
The VoI option works better for me. Given the choice whether to bring up something abstractly interesting or something I feel strongly about, I’ll choose the interesting idea every time. It’s more fun and more fruitful.
Gotcha, this makes sense to me. I would want to follow the VoI strategy in each of your two test cases.
[ I responded to an older, longer version of cousin_it’s comment here, which was very different from what it looks like at present; right now, my comment doesn’t make a whole lot of sense without that context, but I’ll leave it I guess ]
This is a fascinating, alternative perspective!
If this is what LW is for, then I’ve misjudged it and don’t yet know what to make of it.
To me, the game isn’t about changing minds, but about exchanging interesting ideas to mutual benefit. Zero-sum tugs of war are for political subreddits.
I disagree with the frame.
What I’m into is having a community steered towards seeking truth together. And this is NOT a zero-sum game at all. Changing people’s minds so that we’re all more aligned with truth seems infinite-sum to me.
Why? Because the more groundwork we lay for our foundation, the more we can DO.
Were rockets built by people who just exchanged interesting ideas for rocket-building but never bothered to check each other’s math? We wouldn’t have gotten very far if this is where we stayed. So resolving each layer of disagreement led to being able to coordinate on how to build rockets and then building them.
Similarly with rationality. I’m interested in changing your mind about a lot of things. I want to convince you that I can and am seeing things in the universe that, if we can agree on them one way or another, would then allow us to move to the next step, where we’d unearth a whole NEW set of disagreements to resolve. And so forth. That is progress.
I’m willing to concede that LW might not be for this thing, and that seems maybe fine. It might even be better!
But I’m going to look for the thing somewhere, if not here.
(I had a mathy argument here, pointing to this post as a motivation for exchanging ideas instead of changing minds. It had an error, so retracted.)
Yup! That totally makes sense (the stuff in the link) and the thing about the coins.
Also not what I’m trying to talk about here.
I’m not interested in sharing posteriors. I’m interested in sharing the methods by which people arrive at their posteriors (this is what Double Crux is all about).
So in the fair/unfair coin example in the link, the way I’d “change your mind” about whether a coin flip was fair would be to ask, “You seem to think the coin has a 39% chance of being unfair. What would change your mind about that?”
Suppose the answer is, “Well, it depends on what happens when the coin is flipped.” And let’s say this is also a Double Crux for me.
At this point we’d have to start sharing our evidence or gathering more evidence to actually resolve the disagreement. And once we did, we’d both converge towards one truth.
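(A minimal sketch of that last step, assuming purely for illustration that “unfair” means the coin lands heads 80% of the time: two people start from different credences that the coin is unfair, both update on the same shared flips, and their answers move toward each other.)

```python
# Minimal sketch of "share evidence and converge", assuming (purely for
# illustration) that an "unfair" coin lands heads 80% of the time. Two people
# start from different credences that the coin is unfair and both update,
# by Bayes' rule, on the same shared flips.

def posterior_unfair(prior_unfair, flips, p_heads_if_unfair=0.8):
    """P(coin is unfair | flips); flips is a string like 'HHTH'."""
    like_unfair, like_fair = 1.0, 1.0
    for f in flips:
        like_unfair *= p_heads_if_unfair if f == "H" else 1 - p_heads_if_unfair
        like_fair *= 0.5
    numerator = prior_unfair * like_unfair
    return numerator / (numerator + (1 - prior_unfair) * like_fair)

shared_flips = "HHHTHHHH"    # hypothetical evidence both parties observe
for prior in (0.39, 0.75):   # two different starting credences
    print(prior, "->", round(posterior_unfair(prior, shared_flips), 2))
# Different priors, same shared evidence: both credences move toward the same
# answer, and keep converging as more flips are shared.
```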
I think this is a super important perspective. I also think that stating cruxes is a surprisingly good way to find good pieces of information to propagate. My model of this is something like “a lot of topics show up again and again, which suggests that most participants have already heard the standard arguments and standard perspectives. Focusing on people’s cruxes helps the discussion move towards sharing pieces of information that haven’t been shared yet.”
+1 on the concerns about Facebook conversations.
One of the main problems in Facebook conversations, in my view, is that the bar for commenting is way too low. You generally have to sift through a dozen “Nice! Great idea!” and so on to find the real conversations, and random acquaintances feel free to jump into high-level arguments with strawmen, ad hominems, or straight-up non sequiturs all the time.
Now I think LW comments seem to have the opposite problem (the bar feels too high), but all else equal this is the correct side to err on.
I like your vision of a perfect should world, but I feel that you’re ignoring the request to deal with the actual world. People do in fact end up disincentivized from posting due to the sorts of criticism you enjoy. Do you believe that this isn’t a problem, or that it is but it’s not worth solving, or that it’s worth solving but there’s a trivial solution?
Remember that Policy Debates Should Not Appear One-Sided.
Indeed, it is not a problem; it is a solution.
Ok, then that’s the crux of this argument. Personally, I value Eliezer’s writing and Conor Moreton’s writing more than I value a culture of unfettered criticism.
This seems like a good argument for the archipelago concept? You can have your culture of unfettered criticism on some blogs, and I can read my desired authors on their blogs. Would there be negative consequences for you if that model were followed?
There would of course be negative consequences, but see how tendentious your phrasing is: you ask if there would be negative consequences for me, as if to imply that this is some personal concern about personal benefit or harm.
No; the negative consequences are not for me, but for all of us! Without a “culture of unfettered criticism”, as you say, these very authors’ writings will go un-criticized, their claims will not be challenged, and the quality of their ideas will decline. And if you doubt this, then compare Eliezer’s writing now with his writing of ten years ago, and see that this has already happened.
(This is, of course, not to mention the more obvious harms—the spread of bad ideas through our community consensus being only the most obvious of those.)
And if you doubt this, then compare Eliezer’s writing now with his writing of ten years ago, and see that this has already happened.
I suppose I am probably more impressed by the median sequence post than the median post EY writes to facebook now. But my default explanation would just be that 1) he already picked the low hanging fruit of his best ideas, and 2) regression to the mean—no artist can live up to their greatest work.
Edit: feel compelled to add—still mad respect for modern EY posts. Don’t stop writing buddy. (Not that my opinion would have much effect either way.)
I actually prefer the average post in Inadequate Equilibria quite a bit over the average post in the sequences.
Without a “culture of unfettered criticism”, as you say, these very authors’ writings will go un-criticized, their claims will not be challenged, and the quality of their ideas will decline.
This seems like a leap. Criticism being fettered does not mean criticism is absent.
I was quoting PeterBorah; that is the phrasing he used. I kept it in quotes because I don’t endorse it myself. The fact is, “fettered criticism” is a euphemism.
What precisely it’s a euphemism for may vary somewhat from context to context—by the nature of the ‘fetters’, so to speak—and these themselves will be affected by the incentives in place (such as the precise implementation and behavior of the moderation tools available to authors, among others).
But one thing it can easily be a euphemism for, is “actually no substantive criticism at all”.
As for my conclusion being a leap—as I say, the predicted outcome has already taken place. There is no need for speculation. (And it is, of course, only one example out of many.)
I would take your perspective more seriously here if you ever wrote top-level posts. As matters stand, all you do is comment, so your incentives are skewed; I don’t think you understand the perspective of a person who’s considering whether it’s worth investing time and effort into writing a top-level post, and the discussion here is about how to make LW suck less for the highest-quality such people (Eliezer, Conor, etc.).
I do not write top-level posts because my standards for ideas that are sufficiently important, valuable, novel, etc., to justify contributing to the flood of words that is the blogosphere, are fairly high. I would be most gratified to see more people follow my example.
I also think that there is great value to be found in commentary (including criticism). Some of my favorite pieces of writing, from which I’ve gotten great insight and great enjoyment, are in this genre. Some of the writers and intellectuals I most respect are famous largely for their commentary on the ideas of others, and for their incisive criticism of those ideas. To criticize is not to rebuke—it is to contribute; it is to give of one’s time and mental energy, in order to participate in the collective project of cutting away the nonsense and the irrelevancies from the vast and variegated mass of ideas that we are always and unceasingly generating, and to get one step closer to the truth.
In his book Brainstorms, Daniel Dennett quotes the poet Paul Valéry:
“It takes two to invent anything.” He was not referring to collaborative partnerships between people, but to a bifurcation in the individual inventor. “The one”, he says, “makes up combinations; the other one chooses, recognizes what he wishes and what is important to him in the mass of the things which the former has imparted to him. What we call genius is much less the work of that first one than the readiness of the second one to grasp the value of what has been laid before him and to choose.”
We have had, in these past few years (in the “rationalist Diaspora”) and in these past few months (here on Less Wrong 2.0), a great flowering of the former sort of activity. We have neglected the latter. It is good, I think, to try and rectify that imbalance.
I endorse Said’s view, and I’ve written a couple of frontpage posts.
I also add that I think Said is a particularly able and shrewd critic, and I think LW2 would be much poorer if there were a chilling effect on his contributions.
I’ve written front page posts before and largely endorse Said’s view.
At the same time, however, I think the thing Raemon and others are discussing is real, and I discuss it myself in Guided Mental Change Requires High Trust.
I don’t know the explanation for the difference. Several possibilities present themselves—some reflect poorly on me, some on you, some on both or neither of us.
I’ve definitely facepalmed reading rationalists commenting on FB.
My guess is that it’s not “Facebook” that’s the relevant factor, but “Facebook + level of privacy.”
Comments on Public posts are abysmal. Comments on my friends-only posts, sadly, get out of hand too; although not quite as bad as Public. Comments on curated private lists and groups with <300 people on them have been quite good, IME, and have high quality density. (Obviously depends though. Not all groups with those params have this feature.)
LW is very clearly better than some of these. But I think it compares poorly to the well-kept gardens.
(( I am making these points separately from the ‘echo chamber’ concern. ))
Huh. I’ve generally had very good conversations on my Facebook statuses, and all of my statuses are Public by default. But I also aggressively delete unproductive comments (which I don’t have to do very often), and also I generally don’t try to talk about demon thread-y topics on Facebook.
On this, sadly, I cannot speak—I do not have a Facebook account myself, so I am privy to none of these private / friends-only / curated / etc. conversations. It may be as you say.
Of course, if that’s so, then that can’t be replicated on Less Wrong—since here, presumably, every post is public!
Eliezer … has always employed a Reign-of-Terror-esque moderation style. You may disagree with this approach, but it’s not new.
I have seen this claim made before, and found it just as puzzling. I invite you to look through Less Wrong (actually, of course, Overcoming Bias) posts ca. 2007-2009. That was the golden age, in my view, when the Sequences were written. Read the comment sections of those posts. Harshly critical comments—much more harshly critical than are now allowed here—stand un-deleted, their authors un-banned. Eliezer even responds to many of them, as do many other commenters; and discussion takes place.
This doesn’t seem like strong evidence that EY wasn’t moderating. It might just tell you what kinds of things he was willing to allow. (I don’t actually know how much he was moderating at that point.)
I certainly never made the claim that Eliezer wasn’t moderating. Of course he was. But as I said in a previous discussion of this claim:
[The idea that Eliezer used a “Reign of Terror” moderation style, and deleted “unproductive discussion”] is only true under a very, very different (i.e., much more lax) standard of what qualifies as “unproductive discussion”—so different as to constitute an entirely other sort of regime. Calling Sequence-era OB/LW “highly moderated” seems to me like a serious misuse of the term. I invite you to go back to many of the posts of 2007-2009 and look for yourself.
If moderation standards across Less Wrong 2.0 are no stricter than those employed on Sequence-era LW/OB, then my concerns largely fall away.
I think Eliezer has a different set of “reasons a comment might aggravate him” than most of the other authors who’ve complained to us. (note: I’m not that confident in the following, and I don’t want this to turn into a psychoanalyze Eliezer subthread and will lock it if it appears to do that)
I think one of the common failure modes he wants the ability to delete is “comments that tug a discussion sideways into social-reality-space”, where people’s status/tribal modes kick in, distorting people’s epistemics and the topic of the conversation. In particular, comments that subtly do this in such a way that most people won’t notice, but the decline becomes inevitable, and there’s no way to engage with the discussion that doesn’t feed into the problem.
I think looking at his current Facebook Wall (where he deletes things that annoy him) is a pretty reasonable look into what you might expect his comments on LW to look like.
But, speaking of that:
I think an important factor to consider in your calculus is that the end result of the 2 years of great comments you refer to was Eliezer getting tired of dealing with bullshit and moving to Facebook, where his heavy moderation results in fewer complaints (since it’s more obviously his personal fiefdom, as opposed to setting sitewide norms of what moderation is acceptable). And he intends to stay there unless LW enables some FB-like features.
I’m not 100% sure what the timeline was and whether 2007-09 was a problem, but the end result of Eliezer engaging in public discussion was deciding that he didn’t want to do that anymore. That’s exactly the problem this post is trying to solve, and I don’t think you’re engaging with its costs.
It seems to me like our views interact as follows, then:
I say that in the absence of open and lively criticism, bad ideas proliferate, echo chambers are built, and discussion degenerates into streams of sheer nonsense.
You say that in the presence of [what I call] open and lively criticism, authors get tired of dealing with their critics, and withdraw into “safer” spaces.
Perhaps we are both right. What guarantee is there, that this problem can be solved at all? Who promised us that a solution could be found? Must there be a “middle way”, that avoids the better part of both forms of failure? I do not see any reason to be certain of that…
Suppose we accept this pessimistic view. What does that imply, for charting the way forward?
I don’t know. I have only speculations. Here is one:
Perhaps we ought to consider, not the effects of our choice of norms on behavior of given authors, but rather two things:
For what sorts of authors, and for what sorts of ideas, does either sort of norm (when implemented in a public space like Less Wrong) select?
What effects, then, does either sort of norm have, on public consensus, publicly widespread ideas, etc.?