Curated. CFAR’s content has been quite valuable to me personally, and I know many people who have found the individual techniques and overall mindset valuable. (Moreover, many of those concepts have gone on to become building blocks of further rationality development.)
Some of my thoughts here feature in the recent AMA: I think of CFAR as having “forked the LW epistemological codebase”, and then going on to do a bunch of development in a private branch. I think a lot of issues from the past few years have come from disconnects between people who have been using ‘the private beta branch’ and people using the classic ‘LessWrong 1.0 epistemological framework.’
I think this was basically fine (for reasons habryka gets at here and Geoff Anders gets at here), but it does mean there’s a bunch of ‘research debt’ that needs to be paid off.
I do hope at some point for a sequence that’s explicitly designed to stand alone, for readers who haven’t gone to a CFAR workshop. But meanwhile it seems good to have this document more publicly accessible.
Note that this handbook covers maybe only about 2⁄3 of the progress made in that private beta branch, with the remaining third divided into “happened while I was there but hasn’t been written up (hopefully ‘yet’)” and “happened since my departure, and unclear whether anyone will have the time and priority to export it.”
To be clear, I don’t think you mean “This explains about 2/3rds of what CFAR learned about rationality”. I think you mean “This is an artifact that records about 2/3rds of the concrete, teachable techniques that CFAR’s understanding of rationality has output.” (I think I’m right, but happy to be corrected if I’m wrong.)
I actually think 2/3rds seems high – part of my point in saying “Moreover, many of those concepts have gone on to become building blocks of further rationality development” was that I think there’s lots of followup work, which is where the “real” value lives.
Some things I’m thinking of include:
The development of Focusing into Belief Reporting (this is, to be fair, more on the Leverage private beta branch, but I think realistically there are a couple of overlapping private beta branches that are co-dependent)
I don’t know if BeWellTuned.com was especially downstream of CFAR, but my own engagement with it was fairly dependent on being enmeshed in the CFAR community.
I have some impression that this is fairly tip-of-the-iceberg-esque (although admittedly I’m not that confident in that)
The author of BeWellTuned.com had contact with the LessWrong community and did go to CFAR. I’m not sure whether they went to CFAR before or after writing the website. It might very well have been written to show people what they thought when they went to the Bay Area.
I think of CFAR as having “forked the LW epistemological codebase”, and then going on to do a bunch of development in a private branch. I think a lot of issues from the past few years have come from disconnects between people who have been using ‘the private beta branch’ and people using the classic ‘LessWrong 1.0 epistemological framework.’
This rings true, and I like the metaphor. However, you seem to imply that the Open Source original branch is not as good as the private fork, pushed by a handful of people with a high turnover rate, which could be true but is harder to agree with.
So, the thing I actually said in the other thread was:
Naively attempting to merge the latest dev branch back into “Sequences Era LessWrong” results in merge conflicts, and it’s unclear when this is because:
“oh, we just haven’t written up the right explanations to make sure this was backwards compatible”, vs
“oh, these were just some ideas we were experimenting with that didn’t pan out” vs
“oh, this integration-test-failure is actually an indicator that something was wrong with the idea”, vs
“oh, actually, it’s the original LessWrong sequences that are wrong here, not CFAR, and the integration tests need to be rewritten.”
And I stand by this. Regardless of what you think of the “private dev branch”, I think now is a good time to pay down research debt and figure out how to integrate it into a cohesive, well-tested whole.
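For readers less at home with the git metaphor, here is a playful, self-contained sketch of the situation being described, using actual git in a throwaway repository (the file name, branch name, and commit messages are all invented for illustration):

```shell
# Self-contained illustration of the "forked codebase" metaphor.
# Everything here is hypothetical: the file and branch names are made up.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"
main=$(git branch --show-current)      # default branch name varies by git version

echo "original framing" > epistemology.md
git add epistemology.md
git commit -qm "LessWrong 1.0 sequences"

git checkout -qb cfar-private-beta     # fork the shared codebase
echo "private-beta framing" > epistemology.md
git commit -qam "years of private development"

git checkout -q "$main"
echo "sequences-era revision" > epistemology.md
git commit -qam "mainline development"

# Naively merging the dev branch back produces exactly the ambiguity
# described above: git reports a conflict, but cannot tell you WHY the
# two lines of development diverged.
git merge cfar-private-beta || true    # prints: CONFLICT ... Merge conflict in epistemology.md
```

The point of the metaphor is that the merge conflict itself is uninformative: resolving it requires human judgment about which of the four explanations above applies.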
As for my actual opinion: to continue the metaphor, my guess is that the private dev branch is better overall, but, well, buggier. (The “it’s better” opinion comes from personal experience and observation. My strong experience is that the work and thinking I’m most excited about comes from people who have experience with both the LW sequences and the CFAR content). There’s a bunch of stuff the sequences just didn’t do, in terms of figuring out how to translate abstract concepts into something actionable.
My sense of the “bugginess” is in large part because people keep pushing the outer limits of what we understand well about how to learn and practice rationality, and the outer limits are always going to be less well tested and understood.