This… still misconstrues my views, in quite substantive and important ways. Very frustrating.
You write:
Said is holding LessWrong to the standard of a final-publication-journal, when the thing I think LessWrong needs to be includes many stages before that, when you see the messy process that actually generated those final ideas.
I absolutely am not doing that. It makes no sense to say this! It would be like saying “this user test that you’re doing with our wireframe is holding the app we’re developing to the standard of a final-release product”. It’s simply a complete confusion about what testing is even for. The whole point of doing the user test now is that it is just a wireframe, not even a prototype or an alpha version, so getting as much information as possible now is extremely helpful! Nobody’s saying that you have to throw out the whole project and fire everyone involved into the sun the moment you get a single piece of negative user feedback; but if you don’t subject the thing to testing, you’re losing out on a critical opportunity to improve, to correct course… heck, to just plain learn something new! (And for all you know, the test might have a surprisingly positive result! Maybe some minor feature or little widget, which your designers threw in on a lark, elicits an effusive response from your test users, and clues you in to a highly fruitful design approach which you wouldn’t’ve thought worth pursuing. But you’ll never learn that if you don’t test!)
It feels to me like I’ve explained this… maybe as many as a dozen times in this post’s comment section alone. (I haven’t counted. Probably it’s not quite that many. But several, at least!)
I have to ask: is it that you read my explanations but found them unconvincing, and concluded that “oh sure, Said says he believes so-and-so, but I don’t find his actions consistent with those purported beliefs, despite his explicit explanations of why they are consistent with them”?
If so, then the follow-up question is: why do you think that?
I don’t super mind when Said plays this role, but often in my experience Said is making these comments about people I respect a lot more, who’ve put hundreds/thousands of hours into studying how to teach rationality (which absolutely requires being able to model people’s minds, what mistakes they’re likely to be making, what thought processes tend to lead to significant breakthroughs)
What jumps out at me immediately, in this description, is that you describe the people in question as having put a lot of time into studying how to teach rationality. (This, you imply, allows us to assume certain qualifications or qualities on these individuals’ parts, from which we may further conclude… well, you don’t say it explicitly, but the implication seems to be something like “clearly such people know what they’re talking about, and deserve the presumption of such, and therefore it’s epistemically and/or socially inappropriate to treat them as though their ideas might be bullshit, the equivalent of an eager undergrad’s philosophy manifesto”.)
But I notice that you don’t instead (or even in addition) say anything like “people … who have a clear and impressive track record of successfully teaching rationality”.
Of course this could be a simple omission, so I’ll ask explicitly: do you think that the people in question have such a track record?
If you do, and if they do, then of course that’s the relevant fact. And then at least part of the reply to my (perhaps at least seemingly) skeptical questioning (maybe after you give some answer to a question, but I’m not buying it, or I ask follow-ups, etc.) might be “well, Said, here’s my track record; that’s who I am; and when I say it’s like this, you can disbelieve my explanations, but my claims are borne out in what I’ve demonstrably done”.
Now, it’s entirely possible that some people might find such a reply unconvincing, in any given case. Being an expert at something doesn’t make you omniscient, even on one subject! But it’s definitely the sort of response which buys you a good bit of indulgence from skepticism, so to speak, about claims for which you cannot (or don’t care to) provide legible evidence, on the spot and at the moment.
But (as I’ve noted before, though I can’t seem to find the comment in question, right now), those sorts of unambiguous qualifications tend to be mostly or entirely absent, in such cases.
And in the absence of such qualifications, but in the presence of claims like those about “circling” and other such things, it is not less but rather more appropriate to apply the at-least-potentially-skeptical, exploratory, questioning approach. It is not less but rather more important to “poke” at ideas, in ways that may be expected to reveal interesting and productive strengths if the ideas are strong, but to reveal weaknesses if the ideas are weak. It is not less but rather more important not to suppress all but those comments which take the “non-bullshit” nature of the claims for granted.
Said: I absolutely am not doing that. It makes no sense to say this!
Yeah I agree this phrasing didn’t capture your take correctly, and I do recall explicit comments about that in this thread, sorry.
I do claim your approach is in practice often anti-conducive to people doing early stage research. You’ve stated a willingness (I think eagerness?) to drive people away and cause fewer posts from people who I think are actually promising.
But I notice that you don’t instead (or even in addition) say anything like “people … who have a clear and impressive track record of successfully teaching rationality”. Of course this could be a simple omission, so I’ll ask explicitly: do you think that the people in question have such a track record?
My actual answer is “To varying degrees, some more than others.” I definitely do not claim any of them have reached the point of ‘we have a thing working well enough we could persuade an arbitrary skeptic our thing is real and important.’ (i.e. a reliable training program that demonstrably improves quantifiable real world successes). But I think this is a process you should naturally expect to take 4–20 years.
Meanwhile, there are many steps along the way that don’t “produce a cake a skeptical third party can eat”, but which, if you’re actually involved and paying attention, like, clearly are having a relevant effect, and are at least an indication that you’re on a promising path worth experimenting more with. I observe that the people practicing various CFAR and Leverage techniques seem to have a good combination of habits that makes it easier to have difficult conversations in domains with poor feedback loops. The people doing the teaching have hundreds of hours of practice trying to teach skills, seeing mistakes people make along the way, and seeing them make fewer mistakes and actually grok the skill.
Some of the people involved do feel a bit like they’re making some stuff up and coasting on CFAR’s position in the ecosystem, but others seem like they’re legitimately embarking on long-term research projects, tracking their progress in ways that make sense, looking for the best feedback loops they can find, etc.
Anecdata: in 2014 I talked a bunch with a colleague who I respect a lot, and who seemed much smarter than me. We parted ways for 3 years. Later, I met him again, we talked a bunch over the course of a month, and he said “hey, man, you seem smarter than you did 3 years ago.” I said “oh, huh, yeah I thought so too and, like, had worked to become smarter on purpose, but I wasn’t sure whether it worked.”
Nowadays, when I observe people as they do their thinking, I notice tools they’re not using and mistakes they’re making, I suggest fixes, and it seems like they in fact do better thinking.
I think it’s reasonable to not believe me (that the effect is significant, or that it’s CFAR/Leverage mediated). I think it is quite valuable to poke at this. I just don’t think you’re very good at it, and I’m not very interested in satisfying your particular brand of skepticism.
My actual answer is “To varying degrees, some more than others.” I definitely do not claim any of them have reached the point of ‘we have a thing working well enough we could persuade an arbitrary skeptic our thing is real and important.’ (i.e. a reliable training program that demonstrably improves quantifiable real world successes).
An arbitrary skeptic is perhaps too high a bar, but what about a reasonable skeptic? I think that, from that perspective (and especially given the “outside view” on similar things attempted in the past), if you don’t have “a reliable training program that demonstrably improves quantifiable real world successes”, you basically just don’t have anything. If someone asks you “do you have anything to show for all of this”, and all you’ve got is what you’ve got, then… well, I think that I’m not showing any even slightly unreasonable skepticism, here.
But I think this is a process you should naturally expect to take 4-20 years.
Well, CFAR was founded 11 years ago. That’s well within the “4–20” range. Are you saying that it’s still too early to see clear results?
Is there any reason to believe that there will be anything like “a reliable training program that demonstrably improves quantifiable real world successes” in five years (assuming AI doesn’t kill us all or what have you)? Has there been any progress? (On evaluation methods, even?) Is CFAR even measuring progress, or attempting to measure progress, or… what?
Meanwhile, there are many steps along the way … Anecdata …
But you see how these paragraphs are pretty unconvincing, though, right? Like, at the very least, even if you are indeed seeing all these things you describe, and even if they’re real things, you surely can see how there’s… basically no way for me, or anyone else who isn’t hanging out with you and your in-person acquaintances on a regular basis, to see or know or verify any of this?
I think it’s reasonable to not believe me (that the effect is significant, or that it’s CFAR/Leverage mediated). I think it is quite valuable to poke at this. I just don’t think you’re very good at it, and I’m not very interested in satisfying your particular brand of skepticism.
Hold on—you’ve lost track of the meta-level point.
The question isn’t whether it’s valuable to poke at these specific things, or whether I’m good at poking at these specific things.
Here’s what you wrote earlier:

I do sometimes think [Said] successfully points out “emperor has no clothes”. Or, more commonly/accurately, “the undergrad has no thesis.” In some cases his socratic questioning seems like an actually-appropriate relationship between an adjunct professor, and an undergrad who shows up to his philosophy class writing an impassioned manifesto that doesn’t actually make sense and is riddled with philosophical holes. I don’t super mind when Said plays this role, but often in my experience Said is making these comments about people I respect a lot more, who’ve put hundreds/thousands of hours into studying how to teach rationality (which absolutely requires being able to model people’s minds, what mistakes they’re likely to be making, what thought processes tend to lead to significant breakthroughs)
Which I summarized/interpreted as:
… the implication seems to be something like “clearly such people know what they’re talking about, and deserve the presumption of such, and therefore it’s epistemically and/or socially inappropriate to treat them as though their ideas might be bullshit, the equivalent of an eager undergrad’s philosophy manifesto”.
(You didn’t object to that interpretation, so I’m assuming for now that it’s basically correct.)
But the problem is that it’s not clear that the people in question know what they’re talking about. Maybe they do! But it’s certainly not clear, and indeed there’s really no way for me (or any other person outside your social circle) to know that, nor is there any kind of evidence for it, other than personal testimony/anecdata, which is not worth much.
So it doesn’t make sense to suggest that we (the commentariat of Less Wrong) must, or should, treat such folks any differently from anyone else, such as, say, me. There’s no basis for it. From my epistemic position—which, it seems to me, is an eminently reasonable one—these are people who may have good ideas, or they may have bad ideas; they may know what they’re talking about, or may be spouting the most egregious nonsense; I really don’t have any reason to presume one or the other, no more than they have any reason to presume this of me. (Of course we can judge one another by things like public writings, etc., but in this, the people you refer to are no different from any other Less Wrong participant, including wholly anonymous or pseudonymous ones.)
And that, in turn, means that when you say:
… but often in my experience Said is making these comments about people I respect a lot more, who’ve put hundreds/thousands of hours into studying how to teach rationality
… there is actually no good reason at all why that should mean anything or carry any weight in any kind of decision or evaluation.
(There are bad reasons, of course. But we may take it as given that you are not swayed by any such.)