this post contains too much about external factors that might have impeded the Craft-and-Community project, and not enough on what project work was done, why that work didn’t succeed, and/or why it wasn’t enough.
In retrospect, this is true, although CFAR got off much more lightly in the final version than in the first draft, partly because there were only so many hills worth dying on.
I shall give you my thoughts on why the sequences didn’t “produce the desired rocket”:
The vast majority of us weren’t even trying to build the rocket. Yud outlined the design and requested that the community go forth and build the rocket, but most of us were space enthusiasts, not engineers. Granted, if we truly cared about getting the task done, we could have taken a leaf from Elon’s book and started reading through old Soviet rocket manuals, but most of us didn’t emotionally value rocket-building enough to actually do it.
Since the group that formed was mostly space enthusiasts, the status hierarchy within it became “who can describe the desire for interplanetary exploration most vividly”, not anything remotely correlated with engineering ability, so few engineers felt welcome in the community, since their talents weren’t actually valued.
The Center For Aerospace Rocketry that was meant to be training people to build rockets contained no actual rocket scientists, nor did it see any point in hiring any. It attempted to develop new techniques for thinking about how to build rockets, but since none of its staff ever spent much time building rockets, the material that ended up being taught to students was how to write vivid space metaphors, plus vague speculative guesses about how to think about building rockets.
Yud warned about the perils of going straight into rocket-science teaching after hearing rumours that someone was going to take his rocket design and try to teach people to be rocket scientists with it; the community of space enthusiasts assured him he was talking nonsense and refused to listen.
Every person who is trying to build the rocket is doing so on their own, because the space enthusiasts aren’t paying attention. The engineers are hashing out the mechanical properties of various aluminium alloys, which is boooorinnnnngggggg compared to shiny new space metaphors.
The people who have taken it upon themselves to create tools that make the rocket builders more effective at building rockets (physical object-level tools, not meta-level thinking tools) are routinely asked to explain why on earth they are spending their time on such pointless nonsense.
The space enthusiast community that congregated around the rocket design plan gets rather exasperated any time someone interested in engineering points out that they were asked to build the rocket and instead sat around talking about the wonders of space travel.
The widespread attitude is “yeah, obviously someone should build the rocket, but it isn’t going to be me, nor am I going to help anyone, or even avoid looking at people who are trying to build the rocket as kinda crazy”.
Most of the space enthusiasts believe that the Soviet Robot will be resurrected and build the rocket for them far better than they ever could, so they might as well wait for that to happen.
Building rockets requires so many little components of knowledge that one person, or even a small group of people, cannot build a rocket alone. When the people who were supposed to build the rocket don’t, and then passively undermine any attempts to actually build one, no rocket gets built.
Getting back to the object-level point:
Very little work on the craft was ever actually done. CFAR originally told people they were going to develop craft, but only ever bothered trying to develop meta-craft. The people who wanted craft for themselves made do with externally produced material, sorting through all the nonsense to get to the half-decent stuff.
You want post-mortems of craft development; I can’t give you any, because there aren’t any goddamn bodies to examine. I tried to do some craft development myself, and did a little bit of it in an ad-hoc way, but putting in the huge amount of time and effort to formally systemize it never happened, because the initial response from the audience was “meh, we don’t really care or see the value in it”, so I went back to doing what I would be rewarded for.
It eventually started bothering me enough that I devised a way to fix the incentive structure so object-level craft development would start happening.
Edit: I forgot about Gwern.net, which hasn’t failed, but has had limited impact and very few focus areas, although the nootropics section is pretty useful.
This is somewhat closer to what I was asking for, but still mostly about group dynamics rather than engineering (or rather, the analogue of “engineering” on the other side of the rocket analogy). But I take your point that it’s hard to talk about engineering if the team culture was so bad that no one ever tried any engineering.
I do think that it would be very helpful for you to give more specifics, even if they’re specifics like “this person/organization is doing something that is stupid and irrelevant for these reasons.” (If the engineers spent all their time working on their crackpot perpetual motion machine, describe that.)
Basically, I’m asking you to name names (of people/orgs), and give many more specifics about the names you have named (CFAR). This would require you to be less diplomatic than you have been so far, and may antagonize some people. But look, you’re trying to get people to move to a different city (in most cases, a different country) to be part of your new project. You’ve mostly motivated that project by saying, in broad terms, that currently existing rationalists are doing most everything wrong. Moving to a different country is already a risky move, and the EV goes down sharply once some currently-existing-rationalist considers that the founder may well fundamentally disagree with their goals and assumptions. The only way to make this look positive-EV to very many individuals would be to show much more explicitly which sorts of currently-existing-rationalists you disapprove of, so that individuals are able to say “okay, that’s not me.”
See e.g. this bit from one of your other comments:
This depends entirely on how you measure it. If I were to throw all other goals under the bus for the sake of proving you wrong, I’m pretty sure I could find enough women to nod along to a watered-down version. If instead we’re going for rationalist Rationalists, then a lot of the fandom people wouldn’t make the cut, and I suspect that if we managed to outdo tech, we would be beating The Bay.
Consider an individual woman trying to decide whether to move to Manchester from Berkeley. As it stands, you have a not-explicitly-stated theory of what makes the good kind of rationalist, such that many or most female Berkeley rats do not qualify. Without further information, the woman will conclude that she probably does not qualify, in which case she’s definitely not going to move. The only way to fix this is to articulate the theory directly so individuals can check themselves against it.
Okay, here is the list of rationalist-relevant projects that I could think of, and what I think went wrong with each. If you know of any more, I may actually have opinions on them but have just forgotten they were part of this category.
CFAR—Originally kinda tried to do the thing, realised it was hard and would be a lot of work, and ended up doing the meta thing and optimizing for that. As it matured, it basically became Rationality University, with all the connotations that would imply. A lot of people who had been around for a while and absorbed the cultural memes openly admitted they were only shelling out the cash in order to gain access to the alumni network, although in a sense that might not be zero-sum, much in the same way that $5000-a-plate ineffective charity dinners can be: the commitment mechanism increases the group’s quality and trust enough that it is worth paying the price of admission.
MIRI—Took a while to reach organizational competence, and in the early days the ops stuff was pretty amateur, which was pointed out by external evaluations. Since then they have apparently improved, and I take this claim at face value. There still seem to be a few strategic errors, mostly in the signalling department, that are putting up barriers to getting themselves taken seriously. I can understand why Yud wouldn’t want to take time out to get a PhD, but not the neglect of obvious, easily solvable things like “38-year-old men should not have a publicly accessible Facebook profile with a My Little Pony picture in the photos section”.
MetaMed—Relatively little to say here. It has meta in the name, it was not and never pretended to be object-level craft, and that’s totally fine. Worth pointing out that they would have greatly benefited from object-level craft on effective delegation and marketing already existing, so they didn’t have to learn it the hard way. As they said, a startup failing is a poor signal of anything, since it’s the expected outcome.
Dragon Army—At least Duncan is trying something different. The unimpressive resume boasts and the failure to predict how bad it would look were certainly points against him. But from the outside, the operations and implementation of the goals originally set out seem to be pretty solid, regardless of how useful that kind of generalism is to Bay Area programmers.
Accelerator—Formally disbanded due to personal events in the leader’s life, but while it was operating, I struggle to think of a single thing it was doing right. There was a bit of a norms violation in saying “we’re going to work off a consensus of volunteers” while actually operating off unexplained royal decrees from the person with the most money (and not even enough money: the capital Eric had available would have covered much less than half of what was needed to execute his plan).
Arbital—I think this failed because they tried to be Wikipedia, but better, run as a startup. Of all the work required to create Wikipedia, the code is essentially the easy part. Sure, it requires talent, but in terms of raw hours it was probably less than 1% of the time involved in making Wikipedia the world’s knowledge base. I wasn’t close enough to observe, but I’ve got a pretty strong feeling that it was almost, if not entirely, staffed with ingroup members, and that outside people who had valuable connections to volunteers prepared to do the grunt work were overlooked because they didn’t have enough “culture fit”.
Miscellaneous AI groups—I have no specific opinion on their competence, but I’m against the idea that keeps getting thrown about that goes something like “you can’t say we’re not achieving real things; look, we have [list of 10 AI organizations]”. Research in this area should happen, but when more than half of rationalist organizations are either directly about AI or have it as a focus area, it makes us a bit of a one-trick pony. Not to mention that AI was explicitly never meant to be the be-all and end-all of rationalism.
Leverage Research—The thing that “leverage” refers to is the meta level: it basically funds people to sit around in a room thinking up insights at 1-3 meta levels, in the hope that one of them will someday have a spark of brilliance and find something that can be applied to make 100 million and bring the whole operation back out of the red. This is fine, and if you have the financial leverage to implement such an insight, it makes sense to fund something like this.
80,000 Hours—This is probably the closest thing to systemized object-level craft that exists at a scale with much impact. Occasionally ends up implying harmful untrue things like “you should quit your non-replaceable-due-to-quotas job as a doctor to become a charity researcher”. A slight shame that they don’t just offer career advice impartially, rather than optimizing for getting others to be more altruistic.
EA stuff—Good that it exists, and my opinion of their competence varies depending on the org. Meta point here: rational charity evaluation and x-risk research are not the only areas people can or should do systemized winning in.
Consider an individual woman trying to decide whether to move to Manchester from Berkeley. As it stands, you have a not-explicitly-stated theory of what makes the good kind of rationalist, such that many or most female Berkeley rats do not qualify. Without further information, the woman will conclude that she probably does not qualify, in which case she’s definitely not going to move. The only way to fix this is to articulate the theory directly so individuals can check themselves against it.
First of all, the impression that rationalists take ideas seriously enough that the new information brought to light will cause half of Berkeley to vacate to Manchester is simply untrue, even if the facts implied that they should move (which they don’t necessarily, depending on your goals). This is not how people work; this isn’t even how rationalists work. How many people bothered to systematically and rigorously evaluate moving to Berkeley before doing so? I’d wager less than 10%. So it makes sense that laying out the math to prove they messed up won’t actually persuade the other 90% to leave.
I’d be surprised if more than three people who read this in the ~500 person Berkeley community actually decide to up sticks and move in the coming year.
If any women are thinking of moving, it is likely because their reaction when reading this essay was something like “OMG, this is what I’ve been wanting for years”, rather than because I might consider them a real rationalist.
“What does it mean to be a rationalist?” is a broad topic that overlaps with questions like “what personality traits do you need to be a rationalist?”, and I can’t say I have a conclusive answer, but if you want a half-formed one, it’s something like:
has read at least 75% of the sequences
strong curiosity about what’s true/strong appetite for knowledge
tries very hard to put emotions aside when truthseeking
contributes towards the advancement of knowledge in some way
Thanks—this is informative and I think it will be useful for anyone trying to decide what to make of your project.
I have disagreements about the “individual woman” example, but I’m not sure it’s worth hashing out, since it gets into some thorny stuff about persuasion and rhetoric that I’m sure we both have strong opinions on.
Regarding MIRI, I want to note that although the organization has certainly become more competently managed, the more recent OpenPhil review included some very interesting and pointed criticism of the technical work, which I’m not sure enough people saw, as it was hidden in a supplemental PDF. Clearly this is not the place to hash out those technical issues, but they are worth noting, since the reviewer objections were more “these results do not move you toward your stated goal in this paper” than “your stated goal is pointless or quixotic”, so if true they are identifying a rationality failure.