Said: I absolutely am not doing that. It makes no sense to say this!
Yeah I agree this phrasing didn’t capture your take correctly, and I do recall explicit comments about that in this thread, sorry.
I do claim your approach is in practice often anti-conducive to people doing early stage research. You’ve stated a willingness (I think eagerness?) to drive people away and cause fewer posts from people who I think are actually promising.
But I notice that you don’t instead (or even in addition) say anything like “people … who have a clear and impressive track record of successfully teaching rationality”. Of course this could be a simple omission, so I’ll ask explicitly: do you think that the people in question have such a track record?
My actual answer is “To varying degrees, some more than others.” I definitely do not claim any of them have reached the point of ‘we have a thing working well enough we could persuade an arbitrary skeptic our thing is real and important.’ (i.e. a reliable training program that demonstrably improves quantifiable real world successes). But I think this is a process you should naturally expect to take 4–20 years.
Meanwhile, there are many steps along the way that don’t “produce a cake a skeptical third party can eat”, but which, if you’re actually involved and paying attention, clearly are having a relevant effect, and are at least an indication that you’re on a promising path worth experimenting more with. I observe that the people practicing various CFAR and Leverage techniques seem to have a good combination of habits that makes it easier to have difficult conversations in domains with poor feedback loops. The people doing the teaching have hundreds of hours of practice trying to teach skills and seeing the mistakes people make along the way, and they do see people making fewer mistakes and actually grokking the skill.
Some of the people involved do feel a bit like they’re making some stuff up and coasting on CFAR’s position in the ecosystem, but others seem like they’re legitimately embarking on long-term research projects, tracking their progress in ways that make sense, looking for the best feedback loops they can find, etc.
Anecdata: In 2014 I talked a bunch with a colleague who I respect a lot, and who seemed much smarter than me. We parted ways for 3 years. Later, I met him again, we talked a bunch over the course of a month, and he said “hey, man, you seem smarter than you did 3 years ago.” I said “oh, huh, yeah I thought so too and, like, had worked to become smarter on purpose, but I wasn’t sure whether it worked.”
Nowadays, when I observe people as they do their thinking, I notice tools they’re not using and mistakes they’re making, I suggest fixes, and it seems like they in fact do better thinking.
I think it’s reasonable to not believe me (that the effect is significant, or that it’s CFAR/Leverage mediated). I think it is quite valuable to poke at this. I just don’t think you’re very good at it, and I’m not very interested in satisfying your particular brand of skepticism.
My actual answer is “To varying degrees, some more than others.” I definitely do not claim any of them have reached the point of ‘we have a thing working well enough we could persuade an arbitrary skeptic our thing is real and important.’ (i.e. a reliable training program that demonstrably improves quantifiable real world successes).
An arbitrary skeptic is perhaps too high a bar, but what about a reasonable skeptic? I think that, from that perspective (and especially given the “outside view” on similar things attempted in the past), if you don’t have “a reliable training program that demonstrably improves quantifiable real world successes”, you basically just don’t have anything. If someone asks you “do you have anything to show for all of this”, and all you’ve got is what you’ve got, then… well, I think that I’m not showing any even slightly unreasonable skepticism, here.
But I think this is a process you should naturally expect to take 4–20 years.
Well, CFAR was founded 11 years ago. That’s well within the “4–20” range. Are you saying that it’s still too early to see clear results?
Is there any reason to believe that there will be anything like “a reliable training program that demonstrably improves quantifiable real world successes” in five years (assuming AI doesn’t kill us all or what have you)? Has there been any progress? (On evaluation methods, even?) Is CFAR even measuring progress, or attempting to measure progress, or… what?
Meanwhile, there are many steps along the way … Anecdata …
But you see how these paragraphs are pretty unconvincing, right? Like, at the very least, even if you are indeed seeing all these things you describe, and even if they’re real things, you surely can see how there’s… basically no way for me, or anyone else who isn’t hanging out with you and your in-person acquaintances on a regular basis, to see or know or verify any of this?
I think it’s reasonable to not believe me (that the effect is significant, or that it’s CFAR/Leverage mediated). I think it is quite valuable to poke at this. I just don’t think you’re very good at it, and I’m not very interested in satisfying your particular brand of skepticism.
Hold on—you’ve lost track of the meta-level point.
The question isn’t whether it’s valuable to poke at these specific things, or whether I’m good at poking at these specific things.
Here’s what you wrote earlier:
I do sometimes think [Said] successfully points out “emperor has no clothes”. Or, more commonly/accurately, “the undergrad has no thesis.” In some cases his Socratic questioning seems like an actually-appropriate relationship between an adjunct professor and an undergrad who shows up to his philosophy class writing an impassioned manifesto that doesn’t actually make sense and is riddled with philosophical holes. I don’t super mind when Said plays this role, but often in my experience Said is making these comments about people I respect a lot more, who’ve put hundreds/thousands of hours into studying how to teach rationality (which absolutely requires being able to model people’s minds, what mistakes they’re likely to be making, what thought processes tend to lead to significant breakthroughs).
Which I summarized/interpreted as:
… the implication seems to be something like “clearly such people know what they’re talking about, and deserve the presumption of such, and therefore it’s epistemically and/or socially inappropriate to treat them as though their ideas might be bullshit, the equivalent of an eager undergrad’s philosophy manifesto”.
(You didn’t object to that interpretation, so I’m assuming for now that it’s basically correct.)
But the problem is that it’s not clear that the people in question know what they’re talking about. Maybe they do! But it’s certainly not clear, and indeed there’s really no way for me (or any other person outside your social circle) to know that, nor is there any kind of evidence for it, other than personal testimony/anecdata, which is not worth much.
So it doesn’t make sense to suggest that we (the commentariat of Less Wrong) must, or should, treat such folks any differently from anyone else, such as, say, me. There’s no basis for it. From my epistemic position—which, it seems to me, is an eminently reasonable one—these are people who may have good ideas, or they may have bad ideas; they may know what they’re talking about, or may be spouting the most egregious nonsense; I really don’t have any reason to presume one or the other, no more than they have any reason to presume this of me. (Of course we can judge one another by things like public writings, etc., but in this, the people you refer to are no different from any other Less Wrong participant, including wholly anonymous or pseudonymous ones.)
And that, in turn, means that when you say:
… but often in my experience Said is making these comments about people I respect a lot more, who’ve put hundreds/thousands of hours into studying how to teach rationality
… there is actually no good reason at all why that should mean anything or carry any weight in any kind of decision or evaluation.
(There are bad reasons, of course. But we may take it as given that you are not swayed by any such.)