This is somewhat closer to what I was asking for, but still mostly about group dynamics rather than engineering (or rather, the analogue of “engineering” on the other side of the rocket analogy). But I take your point that it’s hard to talk about engineering if the team culture was so bad that no one ever tried any engineering.
I do think that it would be very helpful for you to give more specifics, even if they’re specifics like “this person/organization is doing something that is stupid and irrelevant for these reasons.” (If the engineers spent all their time working on their crackpot perpetual motion machine, describe that.)
Basically, I’m asking you to name names (of people/orgs), and give many more specifics about the names you have named (CFAR). This would require you to be less diplomatic than you have been so far, and may antagonize some people. But look, you’re trying to get people to move to a different city (in most cases, a different country) to be part of your new project. You’ve mostly motivated that project by saying, in broad terms, that currently existing rationalists are doing most everything wrong. Moving to a different country is already a risky move, and the EV goes down sharply once some currently-existing-rationalist considers that the founder may well fundamentally disagree with their goals and assumptions. The only way to make this look positive-EV to very many individuals would be to show much more explicitly which sorts of currently-existing-rationalists you disapprove of, so that individuals are able to say “okay, that’s not me.”
See, e.g., this bit from one of your other comments:
This depends entirely on how you measure it. If I was to throw all other goals under the bus for the sake of proving you wrong, I’m pretty sure I could find enough women to nod along to a watered down version. If instead we’re going for rationalist Rationalists then a lot of the fandom people wouldn’t make the cut and I suspect if we managed to outdo tech, we would be beating The Bay.
Consider an individual woman trying to decide whether to move to Manchester from Berkeley. As it stands, you have a theory of what makes the good kind of rationalist that you haven't stated explicitly, such that many/most female Berkeley rats do not qualify. Without further information, the woman will conclude that she probably does not qualify, in which case she's definitely not going to move. The only way to fix this is to articulate the theory directly so individuals can check themselves against it.
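To make that EV point concrete, here is a minimal toy sketch (Python, with entirely made-up numbers; `move_ev` is a hypothetical helper for illustration, not anything from your post) of how the decision looks from her side:

```python
# Illustrative only: made-up numbers, not anything claimed in the post.

def move_ev(p_fit: float, benefit_if_fit: float,
            cost_if_not: float, moving_cost: float) -> float:
    """Toy expected value of relocating, in arbitrary utility units.

    p_fit is the mover's subjective probability that she counts as the
    "good kind of rationalist" under the founder's (unstated) theory.
    """
    return p_fit * benefit_if_fit - (1 - p_fit) * cost_if_not - moving_cost

# With only vague criticism ("many/most Berkeley rats don't qualify"),
# she inherits a low prior that she qualifies, and the EV is negative:
print(move_ev(p_fit=0.3, benefit_if_fit=100, cost_if_not=60, moving_cost=20))  # -32.0

# If the theory is stated explicitly and she can check "okay, that's not me",
# the probability shifts and the sign of the decision flips:
print(move_ev(p_fit=0.8, benefit_if_fit=100, cost_if_not=60, moving_cost=20))  # 48.0
```

The only variable your explicitness actually controls is `p_fit`, and that is exactly where the sign of the decision flips.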
Okay, here is the list of rationalist-relevant projects I could think of, and what I think went wrong with each. If you know of any more, I may actually have opinions on them but have just forgotten they were part of this category.
CFAR—Originally kinda tried to do the thing, realised it was hard and would be a lot of work, and ended up doing the meta thing and optimizing for that. As it matured, it basically became Rationality University, with all the connotations that implies. A lot of people who had been around for a while and absorbed the cultural memes openly admitted they were only shelling out the cash to gain access to the alumni network, although in a sense that might not be zero-sum, much in the same way that $5,000-a-plate ineffective charity dinners aren't: the commitment mechanism increases the group's quality and trust enough that it is worth paying the price of admission.
MIRI—Took a while to get organizational competence, and in the early days the ops side was pretty amateur, which was pointed out by external evaluations. Since then they have apparently improved, and I take that claim at face value. There still seem to be a few strategic errors, mostly in the signalling department, that are putting up barriers to being taken seriously. I can understand why Yud wouldn't want to take time out to get a PhD, but not the failure to fix obvious, easily solvable things like "38-year-old men should not have a publicly accessible Facebook profile with a My Little Pony picture in the photos section".
MetaMed—Relatively little to say here. It has "meta" in the name, it was not and never pretended to be object-level craft, and that's totally fine. Worth pointing out that they would have greatly benefited from object-level craft on effective delegation and marketing already existing, so they didn't have to learn it the hard way. As they said, a startup failing is a poor signal of anything, since failure is the expected outcome.
Dragon's Army—At least Duncan is trying something different. The unimpressive résumé boasts and the failure to predict how badly the pitch would come across were certainly points against him. But from the outside, the operations and the implementation of the goals originally set out seem pretty solid, regardless of how useful that kind of generalism is to Bay Area programmers.
Accelerator—Formally disbanded due to personal events in the leader's life, but while it was operating I struggle to think of a single thing it was doing right. There was a bit of a norms violation in saying "we're going to work off a consensus of volunteers" while actually operating off unexplained royal decrees from the person with the most money (but not actually enough money: the capital Eric had available would have covered much less than half of what was needed to execute his plan).
Arbital—I think this failed because they tried to be Wikipedia, but better, run as a startup. Of all the work required to create Wikipedia, the code is essentially the easy part. Sure, it requires talent, but in terms of raw hours it was probably less than 1% of the time involved in making Wikipedia the world's knowledge base. I wasn't close enough to observe, but I've got a pretty strong feeling that it was almost, if not entirely, staffed with ingroup members, and that outsiders with valuable connections to volunteers prepared to do the grunt work were overlooked because they didn't have enough "culture fit".
Miscellaneous AI groups—I have no specific opinion on their competence, but I'm against the idea that keeps getting thrown about, something like "you can't say we're not achieving real things, look! We have [list of 10 AI organizations]". Research in this area should happen, but when more than half of rationalist organizations are either directly to do with AI or have it as a focus area, it makes us a bit of a one-trick pony. Not to mention that AI was explicitly never meant to be the be-all and end-all of rationalism.
Leverage Research—The "leverage" refers to the meta level: it basically funds people to sit around in a room thinking of insights at 1-3 meta levels removed, in the hope that one of them will someday have a spark of brilliance and find something that can be applied to make $100 million and bring the whole operation back out of the red. This is fine, and if you have the financial leverage to implement such an insight, it makes sense to fund something like this.
80K Hours—This is probably the closest thing to systemized object-level craft that exists at a scale with much impact. It occasionally ends up implying harmful untrue things like "you should quit your non-replaceable-due-to-quotas job as a doctor to become a charity researcher". It's a slight shame that they don't just offer career advice impartially, rather than optimizing for getting others to be more altruistic.
EA stuff—Good that it exists; my opinion of their competence varies depending on the org. Meta point here: rational charity evaluation and x-risk research are not the only areas people can or should do systemized winning in.
Consider an individual woman trying to decide whether to move to Manchester from Berkeley. As it stands, you have a theory of what makes the good kind of rationalist that you haven't stated explicitly, such that many/most female Berkeley rats do not qualify. Without further information, the woman will conclude that she probably does not qualify, in which case she's definitely not going to move. The only way to fix this is to articulate the theory directly so individuals can check themselves against it.
First of all, the impression that rationalists take ideas seriously enough that the new information brought to light will cause half of Berkeley to vacate to Manchester is simply untrue, even if the facts implied that they should move (which they don't necessarily, depending on your goals). This is not how people work; it isn't even how rationalists work. How many people bothered to systematically and rigorously evaluate moving to Berkeley before doing so? I'd wager less than 10%. So it makes sense that laying out the math to prove they messed up won't actually persuade the other 90% to leave.
I'd be surprised if more than three people in the ~500-person Berkeley community who read this actually decide to up sticks and move in the coming year.
If there are any women thinking of moving, it's likely because their reaction on reading this essay was something like "OMG, this is what I've been wanting for years" rather than because I might consider them a real rationalist.
"What does it mean to be a rationalist" is a broad topic that overlaps with questions like "what personality traits do you need to be a rationalist", and I can't say I have a conclusive answer, but if you want a half-formed one, something like:
has read at least 75% of the Sequences
strong curiosity about what’s true/strong appetite for knowledge
tries very hard to put emotions aside when truthseeking
contributes towards the advancement of knowledge in some way
Thanks—this is informative and I think it will be useful for anyone trying to decide what to make of your project.
I have disagreements about the "individual woman" example, but I'm not sure it's worth hashing out, since it gets into some thorny stuff about persuasion/rhetoric that I'm sure we both have strong opinions on.
Regarding MIRI, I want to note that although the organization has certainly become more competently managed, the more recent OpenPhil review included some very interesting and pointed criticism of the technical work, which I'm not sure enough people saw, as it was hidden in a supplemental PDF. Clearly this is not the place to hash out those technical issues, but they are worth noting, since the reviewer objections were more "these results do not move you toward your stated goal in this paper" than "your stated goal is pointless or quixotic", so if true they identify a rationality failure.