Epistemic Status: brainstorm
Initial approach: aiming for a depth-first brainstorm rather than breadth-first.
Most of this turned out to explore the question of what happens when organizations scale, and assumed a focus on “groups of 10 or so that are trying to slowly scale.”
Background assumptions
As noted elsewhere, a company, or other explicit organization, has a major leg up over “a random diffuse community,” and the problem with the self-described “rationality community” is that it has no actual boundaries or goals.
If you are working with a diffuse group, I think step 1 is to refactor it into something that has boundaries and goals that can be explicitly discussed and agreed upon. (My preferred way to do this is to plant a flag and create a subspace that says “this subspace has specific goals and higher standards in service of those goals.”)
Assuming You’ve Somehow Got a Group Capable-In-Principle of Agreeing Upon Goals
Decide which of your Open Problems matter: I think it might be worth taking stock of the various known problems in group dynamics (or ones you can brainstorm) for a given group, and deciding which of them you actually want to experiment on or push the state of the art on, and which you’re just going to handle with “okay, it seems like some similar-reference-class group has pretty good standards and we’ll copy those.”
I’d expect “looser groups designed for generic progress” to end up having different issues and pressures than “explicit organizations aiming to accomplish a goal.”
[Note: at first I meant the previous sentence as a statement, and upon reflection it is more of a _prediction_ and I’m less certain it’s correct, especially depending on what sort of ‘loose group’ you have going on]
Zero to Ten vs 150 to A Thousand
There’s an initial problem of getting the right people together to seed the culture. There’s a (I think reasonably well understood?) problem of growing that culture: slowly, deliberately, so that at each step the culture has time to reinforce itself before adding more people.
I have a vague sense that there’s a quantum shift when you go from “the Leader(s) know everybody” to “super-Dunbar-number.”
For all the open problems you list, are you more worried about seeding your first 10 people, growing them to 150, or scaling past that?
Finding People vs Training People
I think the EA community is struggling because it seems like you need people who:
- are actually good at their job
- have strong epistemics
- have a strong ability to cooperate/coordinate
- are value-aligned
And there basically aren’t enough people who succeed at all four things, so you have to make tradeoffs.
Growing from 10 to 50
I think for most groups Connor might be involved with, the issue currently at stake is “establishing the first 10 or so people, followed by slowly expanding.” (I don’t know of any EA or rationality groups, professional or otherwise, that both have explicit goals you expect people to cooperate on and are larger than 50 people.)
If groups of 10-ish have been failing to exhibit group rationality, I think it’s plausibly a good strategy to either focus only on strengthening the group rationality of the existing group, or to find some compromise on “what’s a reasonable level of shared rationality that is enough to start scaling.”
I do think, even if you’re focused strongly on “make sure the existing group is strong before scaling,” that some thought should be given to when and how to scale. I think most groups do eventually need to be big to accomplish things, and trying to optimize a tiny group of people can be similar to a “Meta Trap” where you’re never satisfied enough to move on to Stage 2.
(Unless you have a specific task you’re trying to do, and the task is clearly optimized for the number of people you have)
The First Growing Pain
I suspect there’s a quantum shift before reaching Dunbar’s number: the point where the hierarchy becomes more than 2 levels deep, which happens around 50-75 people (e.g. one leader managing ~7 managers, who each manage ~7 people, tops out under 60). This will put strain on the group, and a bunch of Brent-Dill-esque concerns will get further exacerbated.
Pre-First-Growing-Pain
Assuming you’re still at 1 or 2 levels of hierarchy, I assume the Hanson/Dill/Vassar/Rao-style concerns of “people respond to local social incentives, leading to warped behavior, politicking, and working at cross purposes” will still be relevant. I’m mostly aware of a bunch of things not to do, or ways in which things will fail by default.
The zeroth-level problem is maybe people not agreeing on how to even pretend to handle that sort of thing. Common solutions:
- pretend it’s not a thing, rely on a shared Guess Culture with unwritten rules, and filter for people with the right set of assumptions on what those rules are.
- make sure everyone is at least able to acknowledge that Social Reality is a thing, so that they can talk about it explicitly. (This doesn’t necessarily presume any particular manner of resolving issues; it could still be guess culture, but informed guess culture is… well, actually I don’t know if it’s better or worse.)
Transparency, Lack Thereof, and Stress
I have a vague sense that there’s a failure mode among orgs within, or a couple degrees removed from, the rationalsphere (idealistic startups of a certain bent; Bridgewater is an example), where the founders try for:
a) self-improvement
b) transparency
c) optional: care about people’s feelings (but not necessarily have any skill at doing so), and try to resolve things thoroughly
And this gets bad because:
- the focus on self-improvement and transparency results in a lot of criticism, which is hard to do right and leads to lots of stress
- resolving criticism and dealing with feelings is really exhausting
- because people aren’t actually very good at dealing with feelings, everyone quickly learns that the caring about feelings isn’t For Real, but they have to pretend that it is, which is even more exhausting and perhaps threatening
- the transparency is “real” up until the moment management has anything important it needs to hide from employees, at which point it rapidly becomes “employees are forced into transparency, management is transparent when convenient.” (I think it’s probably better to straight-up say “management is going to try to be transparent when convenient” from the beginning.)
- transparency mixed with criticism makes everything worse and even more stressful
That’s about all I have time for, for now.