Thanks for weighing in; I trust these conversations a lot more when they have multiple people from current or former CFAR. (For anyone not tracking, Unreal worked at CFAR for a while.) (And, sorry, I know you said you’re mainly writing this to not-me, but I want to engage anyhow.)
The hypotheses listed mostly focus on the internal aspects of CFAR.
This may be somewhat misleading to a naive reader. (I am speaking mainly to this hypothetical naive reader, not to Anna, who is non-naive.)
… It’s good FOR CFAR to consider what the org could improve on (which is where its leverage is), but for a big-picture view, you should also think about the overall landscape and circumstances surrounding CFAR. Some of this was probably not obvious at the outset of its existence, so CFAR may have had to discover where certain major roadblocks were as it tried to drive forward. This post doesn’t seem to touch on those roadblocks in particular, maybe because they’re not as interesting as considering the potential leverage points.
Re: the above: I was actually trying to focus on not-specific-to-us-as-individuals factors that made the problem hard, or that made particular failure modes easy to fall into. I am hoping this post and its comments might be of use to both future-CFAR (e.g., future-me), and anyone aiming to build an “art of rationality” via some other group/place/effort.
So, if you skim over my hypotheses in the side-panel, they are things like “it’s difficult to distinguish effective and ineffective interventions” and “in practice, many/most domains incentivize social manipulation rather than rationality.” (Not things like “such-and-such an individual had such-and-such an unusual individual weakness.”)
That is, I’m trying to understand and describe the background conditions that, IMO, gradually pulled CFAR and its members toward kinds of activity that had less of a shot at creating a real art of rationality. (My examples do involve us-in-particular, but that’s because that’s where the data is; that’s what we know that others may want to know, when trying to build out an accurate picture of what paths have a shot at getting to a real art of rationality.)
I think we’re maybe tackling the same puzzle, then (the puzzle of “how can a group take a good shot at building an art of rationality / what major obstacles are in the way / what is a person likely to miss in their first attempt, that might be nice to instead know about?”). And we’re simply arriving at different guesses about the answers to that puzzle?
Right.

I think a careful and non-naive reading of your post would avoid the issues I was trying to address.

But on a naive reading, your post might come across as something like, “Oh, CFAR was just not that good at stuff, I guess” / “These issues seem easy to resolve.”

So I felt it was important to acknowledge the magnitude of CFAR’s ambition, and that such projects are actually quite difficult to pull off, especially in the post-modern information age.
//
I wish I could say I was speaking from an interest in tackling the puzzle. I’m not coming from there.