So, the thing I actually said in the other thread was:
Naively attempting to merge the latest dev branch back into “Sequences Era LessWrong” results in merge conflicts, and it’s unclear whether this is because:
“oh, we just haven’t written up the right explanations to make sure this was backwards compatible”, vs
“oh, these were just some ideas we were experimenting with that didn’t pan out” vs
“oh, this integration-test-failure is actually an indicator that something was wrong with the idea”, vs
“oh, actually, it’s the original LessWrong sequences that are wrong here, not CFAR, and the integration tests need to be rewritten.”
And I stand by this. Regardless of what you think of the “private dev branch”, I think now is a good time to pay down research debt and figure out how to integrate it into a cohesive, well-tested whole.
As for my actual opinion: to continue the metaphor, my guess is that the private dev branch is better overall, but, well, buggier. (The “it’s better” opinion comes from personal experience and observation. My strong experience is that the work and thinking I’m most excited about comes from people with experience of both the LW sequences and the CFAR content.) There’s a bunch of stuff the sequences just didn’t do when it comes to translating abstract concepts into something actionable.
My sense of the “bugginess” is in large part because people keep pushing the outer limits of what we understand well about how to learn and practice rationality, and those outer limits are always going to be less well tested and understood.