Noting that I do agree with this particular claim.
I see the situation as:
There are, in fact, good reasons that it’s hard to communicate and demonstrate some things, and that hyperfocus on “what can be made legible to a third party” results in a lot of looking under street lamps, rather than where the value actually is. I have very different priors than Said on how suspicious CFAR’s actions are, as well as different firsthand experience that leads me to believe there’s a lot of value in CFAR’s work that Said presumably dismisses.
[this is not “zero suspicion”, but I bet my suspicion takes a very different shape than Said’s]
But, it’s still important for group rationality to have sound game theory re: what sort of ideas gain what sort of momentum. An important meta-agreement/policy is for people and organizations to be clear about the epistemic status of their ideas and positions.
I think it takes effort to maintain the right epistemic state, as groups and as individuals. So I think it would have been better if CFAR explicitly stated in their handbook, or in a public blogpost*, that “yes, this limits how much people should trust us, and right now we think it’s more important for us to focus on developing a good product than trying to make our current ideas legibly trustworthy.”
As habryka goes into here, I think there are some benefits for researchers to focus internally for a while. The benefits are high bandwidth communication, and being able to push ideas farther and faster than they would if they were documenting every possibly-dead-end-approach at every step of the way. But, after a few years of this, it’s important to write up your findings in a more public/legible way, both so that others can critique it and so that others can build on it.
CFAR seems overdue for this. But, also, CFAR has had lots of staff turnover by now and it’s not that useful to think in terms of “what CFAR ought to do” vs “what people who are invested in the CFAR paradigm should do.” (The Multi-Agent Models sequence is a good step here. I think good next steps would be someone writing up several other aspects of the CFAR paradigm with a similar degree of clarity, and good steps after that would be to think about what good critique/evaluation would look like)
I see this as needing something of a two-way contract, where:
Private researchers credibly commit to doing more public writeups (even though it’s a lot of work that often won’t result in immediate benefit), and at the very least writing up quick, clear epistemic statuses of how people should relate to the research in the meanwhile.
Third party skeptics develop a better understanding of what sort of standards are reasonable, and “cutting researchers exactly the right amount of slack.” I think there’s good reason at this point to be like “geez, CFAR, can you actually write up your stuff and put reasonable disclaimers on things and not ride a wave of vague illegible endorsement?” But my impression is that even if CFAR did all the right things and checked all the right boxes, people would still be frustrated, because the domain CFAR is trying to excel at is in fact very difficult, and a rush towards demonstrability wouldn’t be useful. And I think good criticism needs to understand that.
*I’m not actually sure they haven’t made a public blogpost about this.
“It is very difficult to find a black cat in a dark room—especially when the cat is not there.”
I’ve quoted this saying before, more than once, but I think it’s very applicable to CFAR, and I think it is long past time we acknowledged this.
The point is this: yes, it is possible that what CFAR is trying to excel at is in fact very difficult. It could, indeed, be that CFAR’s techniques are genuinely difficult to teach—and forget about trying to teach them via (non-personally-customized!) text. It could be, yes, that CFAR’s progress toward the problem they are attacking is, while quite real, nonetheless fiendishly difficult to demonstrate, in any sort of legible way. All of these things are possible.
But there is another possibility. It could be that what CFAR is trying to excel at is not very difficult, but impossible. And because of this, it could be that CFAR’s techniques are not hard to teach, but rather there is nothing there to be taught. It could be that CFAR’s progress toward the problem they are attacking, is not hard to demonstrate, but rather nonexistent.
What you say is not, as far as it goes, wrong. But it seems to be predicated on the notion that there is a cat in the room—hidden quite well, perhaps, but nonetheless findable; and also, secondarily, on the notion that what CFAR is doing to look for said cat is even approximately the sort of thing which has any chance at all of finding it.
But while it may have been sensible to start (fully 10 years ago, now!) with the assumption of an existing and definitely present cat, that was then; we’ve since had 10 years of searching, and no apparent progress. Now, it is a very large room, and the illumination is not at all bright, and we can expect no help from the cat, in our efforts to find it. So perhaps this is the sort of thing that takes 10 years to produce results, or 50, or 500—who knows! But the more time passes with no results, the more we should re-examine our initial assumption. It is even worth considering the possibility of calling off the search. Maybe the cat is just too well-hidden (at least, for now).
Yet CFAR does claim to have found something, doesn’t it? Not the whole cat, perhaps, but its tail, at least. But they can’t show it to us, because, apparently—contrary to what we thought we were looking for, and expecting to find—it turns out that our ideas about what sort of thing a cat is, and what it even means to find a cat, and how one knows when one has found it, were mistaken!
Well, I’ve worn out this analogy enough, so here’s my conclusion:
As far as I can tell, CFAR has found nothing (or as near to nothing as hardly matters). The sorts of things they seem to be claiming to have found (to the extent that they’re being at all clear) don’t really seem anything like the sorts of things they set out to look for. And that extent is not very large, because they’re being far, far more secretive than seems to me to be remotely warranted. And CFAR’s behavior in the past has not exactly predisposed me to believe that they are, or will be, honest in reporting their progress (or lack thereof).
If CFAR has found nothing, because the goal is ambitious and progress is difficult, they should say so. “We worked on figuring out how to make humans more rational for nearly a decade but so far, all we have is a catalog of a bunch of things that don’t really work”—that is understandable. CFAR’s actual behavior, not so much—or not, at least, without assuming dishonesty or other deficits of virtue.
P.S.:
“I’m not actually sure they haven’t made a public blogpost about this.”
This is rather suspicious all on its own.
On the “is there something worth teaching there” front, I think you’re just wrong, and obviously so from my perspective (since I have, in fact, learned things. Sunset at Noon is probably the best writeup of what CFAR-descended things I’ve learned and why they’re valuable to me).
This doesn’t mean you’re obligated to believe me. I put moderate probability on “There is variation on what techniques are useful for what people, and Said’s mind is shaped such that the CFAR paradigm isn’t useful, and it will never be legible to Said that the CFAR paradigm is useful.” But, enough words have been spent trying to demonstrate things to you that seem obvious to me that it doesn’t seem worth further time on it.
The Multi-Agent Model of Mind is the best current writeup of (one of) the important elements of what I think of as the CFAR paradigm. I think it’d be more useful for you to critique that than to continue this conversation.
I have read your post Sunset at Noon; I do not recall finding much to comment on at all, there (I didn’t really get the impression that it was meant to be a “writeup of … CFAR-descended things”!), but I will re-read it and get back to you.
As for the multi-agent model of mind, I have already critiqued it, though, admittedly, in a haphazard way—a comment here, a comment there… I have not bothered to critique it in more depth, or in a more targeted way, because… well, to be frank, because of the many times when I’ve attempted to really critique anything, on the new Less Wrong, and been rebuffed for being insufficiently “prosocial”, insufficiently “nice”, etc., etc.
Should I understand your suggestion to mean that if I post critiquing comments on some of the posts in the sequence you linked, they will not spawn lengthy threads about niceness, will not be met with moderator comments about whether I am inserting enough caveats about how of course I respect the OP very much, etc.?
I’m sorry if this seems blunt or harsh; I really don’t mean to be antagonistic toward you in particular (or anyone, really)! But if you reply to my critical comments by saying “but we have put forth our best effort; the ball’s in your court now” (an entirely fair response, if true!), then before I run with said ball, I need to know it’s not a waste of my time.
And, to be clear, if you respond with “no, all the politeness rules still apply, you have to stick to them if you want to critique these writeups [on Less Wrong]”, then—fair enough! But I’d like to know it in advance. (In such a case, I of course will not post any such critiques here; I may post them elsewhere, or not at all, as I find convenient.)
I think that your past criticisms have been useful, and I’ve explicitly tried to take them into account in the sequence. E.g. the way I defined subagents in the first post of the sequence, was IIRC in part copy-pasted from an earlier response to you, and it was your previous comment that helped/forced me to clarify what exactly I meant. I’d in fact been hoping to see more comments from you on the posts, and expect them to be useful regardless of the tone.
I think I should actually punt this question to Kaj_Sotala, since they are his posts, and the meta rule is that authors get to set the norms on their posts. But:
a) if I had written the posts, I would see them as “yes, now these are actually at the stage where the sort of critique Said does is more relevant.” I still think it’d be most useful if you came at it from the frame of “What product is Kaj trying to build, and if I think that product isn’t useful, are there different products that would better solve the problem that Kaj’s product is trying to solve?”
b) relatedly, if you have criticism of the Sunset at Noon content I’d be interested in that. (this is not a general rule about whether I want critiques of that sort. Most of my work is downstream of CFAR paradigm stuff, and I don’t want most of my work to turn into a debate about CFAR. But it does seem interesting to revisit SaN through the “how content that Raemon attributes to CFAR holds up to Said” lens)
c) Even if Kaj prefers you not to engage with them (or to engage only in particular ways), it would be fine under the meta-rules for you to start a separate post and/or discussion thread for the purpose of critiquing. I actually think the most useful thing you might do is write a more extensive post that critiques the sequence as a whole.
I think I should actually punt this question to Kaj_Sotala, since they are his posts, and the meta rule is that authors get to set the norms on their posts.
Sure.
I still think it’d be most useful if you came at it from the frame of “What product is Kaj trying to build, and if I think that product isn’t useful, are there different products that would better solve the problem that Kaj’s product is trying to solve?”
Sure, but what if (as seems likely enough) I think there aren’t any different products that better solve the problem…?
I actually think the most useful thing you might do is write a more extensive post that critiques the sequence as a whole.
So, just as a general point (and this is related to the previous paragraph)…
The problem with the norm of writing critiques as separate posts, is that it biases (or, if you like, nudges) critiques toward the sort that constitute points or theses in their own right.
In other words, if you write a post, and I comment to say that your post is dumb and you are dumb for thinking and writing this and the whole thing is wrong and bad (except, you know, in a tactful way), well, that is, at least in some sense, appropriate (or we might say, it is relevant, in the Gricean sense); you wrote a post, I posted a comment about that post. Fine.
But if you write a post, and I write a post of my own the entirety of whose content and message is “post X which Raemon just wrote is wrong and bad etc.”, well, what is that? Who writes a whole post just to say that someone else is wrong? It seems… odd; and also, antagonistic, somehow. “What was the point of this post?”, commenters may inquire; “Surely you didn’t write a whole post just to say that another post is wrong? What’s your take, then? What Raemon said is wrong, but then what’s right?”—and what do I answer? “I have no idea what’s right, but that is wrong, and… that’s all I wanted to say, really.” As I said, this simply looks odd (socially speaking). (And certainly one is much less likely to get any traction or even engagement—except the dubious sort of engagement; the kind which is all meta, no substance.)
And the thing is, many of my critiques (of CFAR stuff, yes, and of many other things that are discussed in rationalist spaces) boil down to just “what you are saying is wrong”. If you ask me what I think the right answer is, in such cases, I will have nothing to offer you. I don’t know what the right answer is. I don’t think you know what the right answer is, either; I don’t think anyone has the right answer. Beyond saying that (hypothetical) you are wrong, I often really don’t have much to add.
But such criticisms are extremely important! Refraining from falsely believing ourselves to have the right answer, or even a good answer, or even “the best answer so far”, when what we actually have is simply wrong—this is extremely important! It is very tempting to think that we’ve found an answer, when we have not. Avoiding this trap is what allows us to keep looking, and eventually (one hopes!) find the actual right answer.
I understand that you are coming at this from a view in which an idea that someone proposes, a “framework”, etc., has value, and we take that idea and we build on it; or perhaps we say “but what about this instead”, and we offer our own idea or framework, and maybe we synthesize them, and together, cooperatively, we work toward the answer. Under that view, what you say makes sense.
My commentary (not quite an objection, really) is just that it’s crucial to instead be able to say “no, actually, that is simply wrong [because reasons X Y Z]”, and have that be the end of (that branch of) the conversation. You had an idea, that idea was wrong, end of story, back to the drawing board.
That having been said, I do find your response entirely reasonable and satisfactory, as far as this specific case goes; thank you. I will reread both your post and Kaj’s sequence, and comment on both (the latter, contingent on Kaj’s approval).
And the thing is, many of my critiques (of CFAR stuff, yes, and of many other things that are discussed in rationalist spaces) boil down to just “what you are saying is wrong”. If you ask me what I think the right answer is, in such cases, I will have nothing to offer you. I don’t know what the right answer is. I don’t think you know what the right answer is, either; I don’t think anyone has the right answer. Beyond saying that (hypothetical) you are wrong, I often really don’t have much to add.
If all you have to say is “this seems wrong”, that… basically just seems fine. [edit to clarify: I mean making a comment, not a post].
I don’t expect most LessWrong users would get annoyed at that. The specific complaint we’ve gotten about you has more to do with the way you Socratically draw people into lengthy conversations that don’t acknowledge the difference in frame, and leave people feeling like it was a waste of time. (This has more to do with implicitly demanding asymmetric effort between you and the author, than about criticism.)
I’m not quite sure what you’re saying. Yes, no doubt, no one’s complained about me doing the thing I described—because, obviously, I haven’t ever done it! You say that it “basically just seems fine”, but… I don’t expect that it would actually seem “just fine” if I (or anyone else) were to actually do it.
Of course, I could be wrong. What are three examples of posts that others have written, that boil down simply to “other post X, written by person Y, is wrong”, and which have gotten a good reception? Perhaps if we did a case study or three, we’d gain some more insight into this thing.
(As for the “specific complaint”—there I just don’t know what you mean. Feel free to elaborate, if you like.)
Slight clarification – I think I worded the previous comment confusingly. I meant to say, if the typical LessWrong user wrote a single comment in reply to a post saying “this seems wrong”, I would expect that to basically be fine.
I only recommend the “create a whole new post” thing when an author specifically asks you to stop commenting.
(In some cases I think creating a whole new post would actually be just fine, based on how I’ve seen, say, Eliezer, Robin Hanson, Zvi, Ben Hoffman and Sarah Constantin respond to each other in longform on occasion. In other cases creating a whole new post might go over less well, and/or might be a bit of an experiment rather than a tried-and-true-solution, but I think it’s the correct experiment to try)
Also want to be clear—if authors are banning or asking lots of users to avoid criticism, I do think the author should take something of a social hit as “a person who can’t accept any criticism”. But I nonetheless think it’s still a better metanorm for established authors to have control over their post’s discussion area.
[The LessWrong team is currently trying to develop a much clearer understanding of what good moderation policies are, which might result in some of my opinions changing over the next few weeks, this is just a quick summary of what I currently believe]
Also want to be clear—if authors are banning or asking lots of users to avoid criticism, I do think the author should take something of a social hit as “a person who can’t accept any criticism”. But I nonetheless think it’s still a better metanorm for established authors to have control over their post’s discussion area.
Quite. A suggestion, then, if I may: display “how many people has this person banned from their posts” (with, upon a click or mouseover or some such, the full list of users available, who have been thus banned) prominently, when viewing a person’s post (somewhere near the post’s author line, perhaps). This way, if I open a post by one Carol, say, I can see at once that she’s banned 12 people from her posts; I take note of this (as that is unusually many); I then click/mouseover/etc., and see either that all the banned accounts are known trolls and curmudgeons (and conclude that Carol is a sensible person with a low tolerance for low-grade nonsense), or that all the banned accounts are people I judge to be reasonable and polite (and conclude that Carol is a prima donna with a low tolerance for having her ideas challenged).
Something in that space seems basically reasonable. Note that I haven’t prioritized cleaning up (and then improving visibility for) the moderation log in part because the list of users who have ever banned users is actually just extremely short, and meanwhile there’s a lot of other site features that seem higher priority.
I have been revisiting it recently and think it’d be a good thing to include in the nearish future (esp. if I am prioritizing other features that’d make archipelago-norms more likely to actually get used), but for the immediate future I actually think just saying to the few people who’ve expressed concerns ‘yo, when you look at the moderation log almost nobody has used it’ is the right call given limited dev time.
I meant to say, if the typical LessWrong user wrote a single comment in reply to a post saying “this seems wrong”, I would expect that to basically be fine.
Ah, I see. Well, yes. But then, that’s also what I was saying: this sort of thing is generally fine as a comment, but as a post…
I only recommend the “create a whole new post” thing when an author specifically asks you to stop commenting.
I entirely understand your intention here, but consider: this would be even worse, “optics”-wise! “So,” thinks the reader, “this guy was so annoying, with his contrarian objections, that the victim of his nitpicking actually asked him to stop commenting; but he can’t let it go, so he wrote a whole post about it?!” And of course this is an uncharitable perspective, and one which isn’t consistent with “good truth-seeking norms”, etc. But… do you doubt that this is the sort of impression that will, if involuntarily, be formed in the minds of the commentariat?
3. Author says “this is annoying enough that I’d prefer you not to comment on my posts anymore.” [Hopefully, although not necessarily, the author does this knowing that they are basically opting into you now being encouraged by LessWrong moderators to post your criticism elsewhere if you think it’s important. This might not currently be communicated that well but I think it should be]
4. Then you go and write a post titled ‘My thoughts on X’ or ‘Alternative Conversation about X’ or whatever, that says ‘the author seems wrong / bad.’
By that point, sure it might be annoying, but it’s presumably an improvement from the author’s take. (I know that if I wanted to write a post about some high level Weird Introspection Stuff that took a bunch of Weird Introspection Paradigm stuff for granted, I’d personally probably be annoyed if you made the discussion about whether the Weird Introspection Paradigm was even any good, and much less annoyed if you wrote another post saying so.)
I might be typical minding, but two important bits from my perspective are ‘getting to have the conversation that I actually wanted to have’, and ‘not being forced to provide my own platform for someone else who I don’t think is arguing in good faith’
Addendum: my Strategies of Personal Growth post is also particularly downstream of CFAR. (I realize that much of it is something you can find elsewhere. My perspective is that the main product CFAR provides is a culture that makes it easier to orient toward this sort of thing, and stick with it. CFAR iterates on “what combination of techniques can you present to a person in 4 days that best help jump-start them into that culture?”, and they chose that feedback-loop cycle after exploring others and finding them less effective.)
One salient thing from the Strategies of Personal Growth perspective (which I attribute to exploration by CFAR researchers) is that many of the biggest improvements you can gain come from healing and removing psychological blockers.
“There is variation on what techniques are useful for what people, and Said’s mind is shaped such that the CFAR paradigm isn’t useful, and it will never be legible to Said that the CFAR paradigm is useful.”
This happens to be phrased such that it could be literally true, but the implication—that the CFAR paradigm is in fact useful (to some people), and that it could potentially be useful (to some people) but the fact of its usefulness could be illegible to me—cannot be true. (Or, to be more precise, it cannot be true simultaneously with the claim that “the CFAR paradigm” constitutes CFAR succeeding at finding [at least part of] what they were looking for. Is this claim being made? It seems like it is—if not, that should be made clear!)
The reason is simple: the kind of thing that CFAR (claimed to have) set out to look for, is the kind of thing that should be quite legible even to very skeptical third parties. “We found what we were looking for, but you just can’t tell that we did” is manifestly an evasion.
The reason is simple: the kind of thing that CFAR (claimed to have) set out to look for, is the kind of thing that should be quite legible even to very skeptical third parties.
What is your current model of what CFAR “claimed to have set out to look for”? I don’t actually know of an explicit statement of what CFAR was trying to look for, besides the basic concepts of “applied rationality”.
But while it may have been sensible to start (fully 10 years ago, now!)
Correction: CFAR was started in 2012 (though I believe some of the founders ran rationality camps the previous summer, in 2011), so it’s been 7 (or 8) years, not 10.
Less Wrong, however, was launched in 2009, and that is what I was referring to (namely, Eliezer’s posts about the community and Bayesian dojos and so forth).
P.S.:
This is rather suspicious all on its own.
On the “is there something worth teaching there” front, I think you’re just wrong, and obviously so from my perspective (since I have, in fact, learned things. Sunset at Noon is probably the best writeup of what CFAR-descended things I’ve learned and why they’re valuable to me).
This doesn’t mean you’re obligated to believe me. I put moderate probability on “There is variation on what techniques are useful for what people, and Said’s mind is shaped such that the CFAR paradigm isn’t useful, and it will never be legible to Said that the CFAR paradigm is useful.” But, enough words have been spent trying to demonstrate things to you that seem obvious to me that it doesn’t seem worth further time on it.
The Multi-Agent Model of Mind is the best current writeup of (one of) the important elements of what I think of as the CFAR paradigm. I think it’d be more useful for you to critique that than to continue this conversation.
I have read your post Sunset at Noon; I do not recall finding much to comment on there at all (I didn’t really get the impression that it was meant to be a “writeup of … CFAR-descended things”!), but I will re-read it and get back to you.
As for the multi-agent model of mind, I have already critiqued it, though, admittedly, in a haphazard way—a comment here, a comment there… I have not bothered to critique more in-depth, or in a more targeted way, because… well, to be frank, because of the many times when I’ve attempted to really critique anything, on the new Less Wrong, and been rebuffed for being insufficiently “prosocial”, insufficiently “nice”, etc., etc.
Should I understand your suggestion to mean that if I post critiquing comments on some of the posts in the sequence you linked, they will not spawn lengthy threads about niceness, will not be met with moderator comments about whether I am inserting enough caveats about how of course I respect the OP very much, etc.?
I’m sorry if this seems blunt or harsh; I really don’t mean to be antagonistic toward you in particular (or anyone, really)! But if you reply to my critical comments by saying “but we have put forth our best effort; the ball’s in your court now” (an entirely fair response, if true!), then before I run with said ball, I need to know it’s not a waste of my time.
And, to be clear, if you respond with “no, all the politeness rules still apply, you have to stick to them if you want to critique these writeups [on Less Wrong]”, then—fair enough! But I’d like to know it in advance. (In such a case, I of course will not post any such critiques here; I may post them elsewhere, or not at all, as I find convenient.)
I think that your past criticisms have been useful, and I’ve explicitly tried to take them into account in the sequence. E.g. the way I defined subagents in the first post of the sequence, was IIRC in part copy-pasted from an earlier response to you, and it was your previous comment that helped/forced me to clarify what exactly I meant. I’d in fact been hoping to see more comments from you on the posts, and expect them to be useful regardless of the tone.
I think I should actually punt this question to Kaj_Sotala, since they are his posts, and the meta rule is that authors get to set the norms on their posts. But:
a) if I had written the posts, I would see them as “yes, now these are actually at the stage where the sort of critique Said does is more relevant.” I still think it’d be most useful if you came at it from the frame of “What product is Kaj trying to build, and if I think that product isn’t useful, are there different products that would better solve the problem that Kaj’s product is trying to solve?”
b) relatedly, if you have criticism of the Sunset at Noon content I’d be interested in that. (this is not a general rule about whether I want critiques of that sort. Most of my work is downstream of CFAR paradigm stuff, and I don’t want most of my work to turn into a debate about CFAR. But it does seem interesting to revisit SaN through the “how content that Raemon attributes to CFAR holds up to Said” lens)
c) Even if Kaj prefers you not to engage with them (or to engage only in particular ways), it would be fine under the meta-rules for you to start a separate post and/or discussion thread for the purpose of critiquing. I actually think the most useful thing you might do is write a more extensive post that critiques the sequence as a whole.
Sure.
Sure, but what if (as seems likely enough) I think there aren’t any different products that better solve the problem…?
So, just as a general point (and this is related to the previous paragraph)…
The problem with the norm of writing critiques as separate posts, is that it biases (or, if you like, nudges) critiques toward the sort that constitute points or theses in their own right.
In other words, if you write a post, and I comment to say that your post is dumb and you are dumb for thinking and writing this and the whole thing is wrong and bad (except, you know, in a tactful way), well, that is, at least in some sense, appropriate (or we might say, it is relevant, in the Gricean sense); you wrote a post, I posted a comment about that post. Fine.
But if you write a post, and I write a post of my own the entirety of whose content and message is “post X which Raemon just wrote is wrong and bad etc.”, well, what is that? Who writes a whole post just to say that someone else is wrong? It seems… odd; and also, antagonistic, somehow. “What was the point of this post?”, commenters may inquire; “Surely you didn’t write a whole post just to say that another post is wrong? What’s your take, then? What Raemon said is wrong, but then what’s right?”—and what do I answer? “I have no idea what’s right, but that is wrong, and… that’s all I wanted to say, really.” As I said, this simply looks odd (socially speaking). (And certainly one is much less likely to get any traction or even engagement—except the dubious sort of engagement; the kind which is all meta, no substance.)
And the thing is, many of my critiques (of CFAR stuff, yes, and of many other things that are discussed in rationalist spaces) boil down to just “what you are saying is wrong”. If you ask me what I think the right answer is, in such cases, I will have nothing to offer you. I don’t know what the right answer is. I don’t think you know what the right answer is, either; I don’t think anyone has the right answer. Beyond saying that (hypothetical) you are wrong, I often really don’t have much to add.
But such criticisms are extremely important! Refraining from falsely believing ourselves to have the right answer, or even a good answer, or even “the best answer so far”, when what we actually have is simply wrong—this is extremely important! It is very tempting to think that we’ve found an answer, when we have not. Avoiding this trap is what allows us to keep looking, and eventually (one hopes!) find the actual right answer.
I understand that you are coming at this from a view in which an idea that someone proposes, a “framework”, etc., has value, and we take that idea and we build on it; or perhaps we say “but what about this instead”, and we offer our own idea or framework, and maybe we synthesize them, and together, cooperatively, we work toward the answer. Under that view, what you say makes sense.
My commentary (not quite an objection, really) is just that it’s crucial to instead be able to say “no, actually, that is simply wrong [because reasons X Y Z]”, and have that be the end of (that branch of) the conversation. You had an idea, that idea was wrong, end of story, back to the drawing board.
That having been said, I do find your response entirely reasonable and satisfactory, as far as this specific case goes; thank you. I will reread both your post and Kaj’s sequence, and comment on both (the latter, contingent on Kaj’s approval).
If all you have to say is “this seems wrong”, that… basically just seems fine. [edit to clarify: I mean making a comment, not a post].
I don’t expect most LessWrong users would get annoyed at that. The specific complaint we’ve gotten about you has more to do with the way you Socratically draw people into lengthy conversations that don’t acknowledge the difference in frame, and leave people feeling like it was a waste of time. (This has more to do with implicitly demanding asymmetric effort between you and the author, than about criticism).
I’m not quite sure what you’re saying. Yes, no doubt, no one’s complained about me doing the thing I described—because, obviously, I haven’t ever done it! You say that it “basically seems just fine”, but… I don’t expect that it would actually seem “just fine” if I (or anyone else) were to actually do it.
Of course, I could be wrong. What are three examples of posts that others have written, that boil down simply to “other post X, written by person Y, is wrong”, and which have gotten a good reception? Perhaps if we did a case study or three, we’d gain some more insight into this thing.
(As for the “specific complaint”—there I just don’t know what you mean. Feel free to elaborate, if you like.)
Slight clarification – I think I worded the previous comment confusingly. I meant to say, if the typical LessWrong user wrote a single comment in reply to a post saying “this seems wrong”, I would expect that to basically be fine.
I only recommend the “create a whole new post” thing when an author specifically asks you to stop commenting.
(In some cases I think creating a whole new post would actually be just fine, based on how I’ve seen, say, Eliezer, Robin Hanson, Zvi, Ben Hoffman and Sarah Constantin respond to each other in longform on occasion. In other cases creating a whole new post might go over less well, and/or might be a bit of an experiment rather than a tried-and-true-solution, but I think it’s the correct experiment to try)
Also want to be clear—if authors are banning or asking lots of users to avoid criticism, I do think the author should take something of a social hit as “a person who can’t accept any criticism”. But I nonetheless think it’s still a better metanorm for established authors to have control over their post’s discussion area.
[The LessWrong team is currently trying to develop a much clearer understanding of what good moderation policies are, which might result in some of my opinions changing over the next few weeks, this is just a quick summary of what I currently believe]
Quite. A suggestion, then, if I may: display “how many people has this person banned from their posts” (with, upon a click or mouseover or some such, the full list of users available, who have been thus banned) prominently, when viewing a person’s post (somewhere near the post’s author line, perhaps). This way, if I open a post by one Carol, say, I can see at once that she’s banned 12 people from her posts; I take note of this (as that is unusually many); I then click/mouseover/etc., and see either that all the banned accounts are known trolls and curmudgeons (and conclude that Carol is a sensible person with a low tolerance for low-grade nonsense), or that all the banned accounts are people I judge to be reasonable and polite (and conclude that Carol is a prima donna with a low tolerance for having her ideas challenged).
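To illustrate the suggested feature: a purely hypothetical sketch of how such a per-author ban summary might be computed from a site’s moderation log. All names, function signatures, and data shapes here are invented for illustration; nothing below reflects any actual LessWrong API.

```python
# Hypothetical sketch: aggregate a moderation log into per-author ban lists,
# so a post page could display "this author has banned N users", with the
# full list available on click or mouseover.
from collections import defaultdict

def ban_summary(moderation_log):
    """moderation_log: iterable of (author, banned_user) pairs.

    Returns a dict mapping each author to a sorted list of the distinct
    users they have banned from their posts.
    """
    bans = defaultdict(set)
    for author, banned_user in moderation_log:
        bans[author].add(banned_user)  # a set deduplicates repeat entries
    return {author: sorted(users) for author, users in bans.items()}

# Example: "Carol" has banned two distinct users, so the indicator next to
# her posts would read 2, and the hover list would show both names.
log = [("Carol", "troll_a"), ("Carol", "troll_b"), ("Carol", "troll_a")]
summary = ban_summary(log)
```

The count shown by the post author’s name would then simply be `len(summary.get(author, []))`.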
Something in that space seems basically reasonable. Note that I haven’t prioritized cleaning up (and then improving visibility for) the moderation log in part because the list of users who have ever banned users is actually just extremely short, and meanwhile there’s a lot of other site features that seem higher priority.
I have been revisiting it recently and think it’d be a good thing to include in the nearish future (esp. if I am prioritizing other features that’d make archipelago-norms more likely to actually get used), but for the immediate future I actually think just saying to the few people who’ve expressed concerns ‘yo, when you look at the moderation log almost nobody has used it’ is the right call given limited dev time.
Ah, I see. Well, yes. But then, that’s also what I was saying: this sort of thing is generally fine as a comment, but as a post…
I entirely understand your intention here, but consider: this would be even worse, “optics”-wise! “So,” thinks the reader, “this guy was so annoying, with his contrarian objections, that the victim of his nitpicking actually asked him to stop commenting; but he can’t let it go, so he wrote a whole post about it?!” And of course this is an uncharitable perspective, and one which isn’t consistent with “good truth-seeking norms”, etc. But… do you doubt that this is the sort of impression that will, if involuntarily, be formed in the minds of the commentariat?
I’m fairly uncertain here. But I don’t currently share the intuition.
Note that the order of events I’m suggesting is:
1. Author posts.
2. Commenter says “this seems wrong / bad”. Disagreement ensues.
3. Author says “this is annoying enough that I’d prefer you not to comment on my posts anymore.” [Hopefully, although not necessarily, the author does this knowing that they are basically opting into you now being encouraged by LessWrong moderators to post your criticism elsewhere if you think it’s important. This might not currently be communicated that well but I think it should be]
4. Then you go and write a post titled ‘My thoughts on X’ or ‘Alternative Conversation about X’ or whatever, that says ‘the author seems wrong / bad.’
By that point, sure, it might be annoying, but it’s presumably an improvement from the author’s perspective. (I know that if I wanted to write a post about some high level Weird Introspection Stuff that took a bunch of Weird Introspection Paradigm stuff for granted, I’d personally probably be annoyed if you made the discussion about whether the Weird Introspection Paradigm was even any good, and much less annoyed if you wrote another post saying so.
I might be typical minding, but two important bits from my perspective are ‘getting to have the conversation that I actually wanted to have’, and ‘not being forced to provide my own platform for someone else who I don’t think is arguing in good faith’.)
Addendum: my Strategies of Personal Growth post is also particularly downstream of CFAR. (I realize that much of it is something you can find elsewhere. My perspective is that the main product CFAR provides is a culture that makes it easier to orient toward this sort of thing, and stick with it. CFAR iterates on “what combination of techniques can you present to a person in 4 days that best help jump-start them into that culture?”, and they chose that feedback-loop-cycle after exploring others and finding them less effective)
One salient thing from the Strategies of Personal Growth perspective (which I attribute to exploration by CFAR researchers) is that many of the biggest improvements you can gain come from healing and removing psychological blockers.
Separately from my other comment, I will say—
This happens to be phrased such that it could be literally true, but the implication—that the CFAR paradigm is in fact useful (to some people), and that it could potentially be useful (to some people) but the fact of its usefulness could be illegible to me—cannot be true. (Or, to be more precise, it cannot be true simultaneously with the claim being true that “the CFAR paradigm” constitutes CFAR succeeding at finding [at least part of] what they were looking for. Is this claim being made? It seems like it is—if not, that should be made clear!)
The reason is simple: the kind of thing that CFAR (claimed to have) set out to look for, is the kind of thing that should be quite legible even to very skeptical third parties. “We found what we were looking for, but you just can’t tell that we did” is manifestly an evasion.
What is your current model of what CFAR “claimed to have set out to look for”? I don’t actually know of much in the way of an explicit statement of what CFAR was trying to look for, besides the basic concept of “applied rationality”.
Correction: CFAR was started in 2012 (though I believe some of the founders ran rationality camps the previous summer, in 2011), so it’s been 7 (or 8) years, not 10.
Less Wrong, however, was launched in 2009, and that is what I was referring to (namely, Eliezer’s posts about the community and Bayesian dojos and so forth).