I agree that this applies more to mass movements than smaller intellectual groups.
Recall that my claim is “if you’re trying to coordinate with 1/10/100/1000+ people, these are the constraints or causes/effects on how you can communicate (which are different for each scale)”.
It also naively suggests different constraints on EA (which seems a bit more like a mass movement) than LessWrong (which sort of flirted with being a mass movement, but then didn’t really follow up on it. It seems to me that the number of ‘serious contributors’ is more like “around 100-200” than “1000+”). And meanwhile, not everyone on LW is actually trying to coordinate with anyone, which is fine.
...
There are some weirder questions that come into play when you’re building a theory about coordination, in public in a space that does coordination. For now, set those aside and focus just on things like, say, developing theories of physics.
If you’re not trying to coordinate with anyone, you can think purely about theory with no cost.
If you’re an intellectual trying to coordinate only with intellectuals who want to follow your work (say, in the ballpark of 10 people), you can expect to have N words’ worth of shared nuance. (My previous best guess for N is 200,000 words’ worth, but I don’t strongly stand by that guess.)
It is an actually interesting question, for purely intellectual pursuits, whether you get more value out of having a single collaborator you spend hours each day talking to, vs. a larger number of collaborators. You might want to focus on getting your own theory right without regard for other people’s ability to follow you (and if so, you might keep it all to yourself for the time being, or you might post braindumps to a public forum without optimizing them for readability, let others skim them and see if they’re worth pursuing, and then only communicate further with those people if it seems worth it).
But there is an actual benefit to your ability to think, to have other people who can understand what you’re saying so they can critique it (or build off it). This may (or may not) lead you to decide it’s worth putting effort into distillation, so that you can get more eyes reading the thing. (Or, you might grab all the best physicists and put them in a single lab together, where nobody has to spend effort per se on distillation, it just happens naturally as a consequence of conversation)
Again, this is optional. But it’s an open question, even just in the domain of physics, how much you want to try to coordinate with others, and then what strategies that requires.
trying to coordinate with 1/10/100/1000+ people [...] not everyone on LW is actually trying to coordinate with anyone, which is fine.
I wonder if it might be worth writing a separate post explaining why the problems you want to solve with 10/100/1000+ people have the structure of a coordination problem (where it’s important not just that we make good choices, but that we make the same choice), and how much coordination you think is needed?
In World A, everyone has to choose Stag, or the people who chose Stag fail to accomplish anything. The payoff is discontinuous in the number of people choosing Stag: if you can’t solve the coordination problem, you’re stuck with rabbits.
In World B, the stag hunters get a payoff of n^1.1 stags, where n is the number of people choosing Stag. The payoff is continuous in n: it would be nice if the group was better-coordinated, but it’s usually not worth sacrificing on other goals in order to make the group better-coordinated. We mostly want everyone to be trying their hardest to get the theory of hunting right, rather than making sure that everyone is using the same (possibly less-correct) theory.
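To make the contrast concrete, here is a minimal sketch; only the n^1.1 formula comes from the description above, while the group size and the World A payoff numbers are my own illustrative choices:

```python
# Toy comparison of the two payoff structures described above.
# World A: discontinuous -- stag hunters get nothing unless everyone joins.
# World B: continuous -- n stag hunters share a payoff of n**1.1 stags.

GROUP_SIZE = 10    # hypothetical group size
STAG_PAYOFF = 10   # hypothetical payoff if World A coordination fully succeeds

def world_a_payoff(n, group_size=GROUP_SIZE):
    """Discontinuous: the stag hunters accomplish nothing unless all choose Stag."""
    return STAG_PAYOFF if n == group_size else 0

def world_b_payoff(n):
    """Continuous: payoff grows smoothly (slightly superlinearly) with n."""
    return n ** 1.1

for n in range(GROUP_SIZE + 1):
    print(f"n={n:2d}  World A: {world_a_payoff(n):2d}  World B: {world_b_payoff(n):5.2f}")
```

The point of the contrast: in World A everything hinges on hitting full coordination, while in World B each additional coordinated person helps but nobody’s effort is wasted if the group falls short.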
I think I mostly perceive myself as living in World B, and tend to be suspicious of people who seem to assume we live in World A without adequately arguing for it (when “Can’t do that, it’s a coordination problem” would be an awfully convenient excuse for choices made for other reasons).
Stag/Rabbit is a simplification (hopefully obvious, but worth stating explicitly to avoid accidental motte/bailey-ing). A slightly higher-resolution simplification:
When it comes to “what norms do we want”, it’s not all-or-nothing; but if different groups are pushing different norms in the same space, there’s deadweight loss as some people get annoyed at others for violating their preferred norms, and/or get confused about what they’re actually supposed to be doing.
[modeling this out properly and explicitly would take me at least 30 minutes and possibly much longer. Makes more sense to do later on as a post]
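As a quick placeholder until that post, here is a purely illustrative toy sketch of the kind of mixed-norm deadweight loss being gestured at; the friction cost and group sizes are made up:

```python
# Purely illustrative toy sketch (not the deferred full model): two groups pushing
# different norms in the same space, where every pair of people whose norms
# mismatch pays some friction cost (annoyance, confusion about what's expected).
from itertools import combinations

FRICTION_COST = 1  # hypothetical cost per mismatched-norm pair, in abstract units

def total_friction(norms):
    """Total friction summed over all pairs whose norms differ."""
    return sum(FRICTION_COST for a, b in combinations(norms, 2) if a != b)

print(total_friction(["A"] * 20))               # 0   -> everyone shares one norm-set
print(total_friction(["A"] * 10 + ["B"] * 10))  # 100 -> deadweight loss in a mixed space
```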
Oh, I see; the slightly-higher-resolution version makes a lot more sense to me. When working out the game theory, I would caution that different groups pushing different norms is more like an asymmetric “Battle of the Sexes” problem, which is importantly different from the symmetric Stag Hunt. In Stag Hunt, everyone wants the same thing, and the problem is just about risk-dominance vs. payoff-dominance. In Battle of the Sexes, the problem is about how people who want different things manage to live with each other.
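For reference, a minimal illustration of the structural difference, using textbook-style payoff numbers of my own choosing rather than anything from the thread:

```python
# Payoffs are (row player, column player), indexed by (row action, column action).
# Stag Hunt: symmetric -- both players rank (Stag, Stag) highest; the tension is
# between the payoff-dominant and the risk-dominant equilibrium.
stag_hunt = {
    ("Stag", "Stag"):     (4, 4),  # payoff-dominant equilibrium
    ("Stag", "Rabbit"):   (0, 3),
    ("Rabbit", "Stag"):   (3, 0),
    ("Rabbit", "Rabbit"): (3, 3),  # risk-dominant equilibrium
}

# Battle of the Sexes: asymmetric -- each player prefers a different coordinated
# outcome, but both prefer coordinating to failing to coordinate.
battle_of_the_sexes = {
    ("Opera", "Opera"): (3, 2),  # row player's preferred equilibrium
    ("Opera", "Fight"): (0, 0),
    ("Fight", "Opera"): (0, 0),
    ("Fight", "Fight"): (2, 3),  # column player's preferred equilibrium
}
```

In the Stag Hunt both players rank the same outcome first, so the problem is purely about risk; in Battle of the Sexes they rank the two coordinated outcomes differently, so the problem is about living with disagreement.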
Nod. Yeah that may be a better formulation. I may update the Staghunt post to note this.
“Notice that you’re not actually playing the game you think you’re playing” is maybe a better general rule. (i.e. in the Staghunt article, I was addressing people who think that they’re in a prisoner’s dilemma, but actually they’re in something more like a staghunt. But, yeah, at least some of the time they’re actually in a Battle of the Sexes, or… well, in real life it’s always some complicated nuanced thing.)
The core takeaway from the Staghunt article that still seems good to me is “if you feel like other people are defecting on your preferred strategy, actually check to see if you can coordinate on your preferred strategy. If it turns out people aren’t just making a basic mistake, you may need to actually convince people your strategy is good (or learn from them why your strategy is not in fact straightforwardly good).”
I think this (probably?) remains a good strategy in most payoff-variants.