Are rationalist organizations/mentors likely to be significantly better than non-rationalist ones?
If you’re trying to get mentored in x-risk or rationality, yes.
Even those don’t seem completely obvious to me.
I agree that in general, rationalists have a valuable package of insights that isn’t found elsewhere, but “this package is deep enough and important enough to necessitate working directly with experts on an ongoing basis” is a very high bar. A lot of the relevant knowledge in x-risk and rationality can be obtained more cheaply: by reading LW and papers, attending an event or workshop a few times a year, etc. There are probably some subsets of x-risk and rationality for which rationalists happen to hold the best knowledge, and if those are the things you’re most interested in, then it might pay to work with/be mentored by rationalists in particular. But it looks to me like there are plausibly also large subsets of both x-risk and rationality for which the best knowledge is found elsewhere, and for a person interested in those subsets, it’s enough to pick up the remaining rationalist insights by shallower means than constant interaction.
For x-risk: there are many fields in which scholarship and years of experience do not actually develop real expertise, and “experts” perform little better than novices. “X-risk in general” looks exactly like the kind of field where this applies, as it falls much closer to the “bad performance” than the “good performance” column of Shanteau’s table of which domains allow expertise to develop. In particular, developing expertise requires repeated objective feedback that lets you revise your predictions and models; for x-risk, as for similar domains (intelligence analysis etc.), such feedback is mostly missing, and largely subjective when it does exist. So the default assumption should be that people who have devoted a lot of time to studying x-risk in general are not going to be much better at it than people with only limited exposure.
Now if you move to some specific subset of x-risk work, such as AI risk or biosecurity, you might get better feedback loops. I talked about this in a presentation I gave at GoCAS a couple of years ago, and suggested that something like this is the way to go. Worth noting that despite spending two months there hanging out with x-risk scholars in person, I didn’t feel like I got much valuable new information about x-risk in general.
But then the question is “does the community in question have strong feedback loops for this subfield of x-risk”, not “is the community rationalist”. It does seem plausible to me that for at least some subsets of AI risk work in particular, the people most closely involved in the useful work also tend to be rationalists to at least some extent. On the other hand, if one were concerned with e.g. biosecurity, I do not see rationalists as being particularly engaged with that field, and it seems plausible that the best expertise for that subfield of x-risk is found elsewhere.
For rationality: it’s true that I have gotten a lot of rationality out of the Sequences, LW in general, CFAR, etc. On the other hand, the value of the Sequences and LW obviously came through online interaction. And I feel like I have also gotten a lot of rationality out of IFS training, meditation teachers, the methodological discussions of various disciplines, etc. So again the relevant question seems to be less “are you interested in rationality in general” and more “what kind of rationality are you most interested in, and where are the best people with the skillset that most relates to it”: some of them may be found in the rationalist community, but for many subsets of rationality, the best teachers are probably elsewhere.