It seemed like the hidden second half of the core claim was “and therefore we should coordinate around simpler slogans,” and not the obvious alternative conclusion “and therefore we should scale up more carefully, with an uncompromising emphasis on some aspects of quality control.” (See On the Construction of Beacons for the relevant argument.)
“Scale up more carefully” is a reasonable summary of what I intended to convey, although I meant it more like “here are specific ways you might fuck up if you aren’t careful.” At varying levels of scale, what is actually possible, and why?
FWIW, the motivating example for You Have About Five Words was recent (at the time) EA backlash about the phrase “EA is Talent Constrained”, which many people interpreted to mean “if I’m, like, reasonably talented, EA organizations will need me and hire me”, as opposed to “The EA ecosystem is looking for particular rare talents and skills, and this is more important than funding at the moment.”
The original 80k article was relatively nuanced about this (although, re-reading it now, I’m not sure it really spells out the particular distinction that’d become a source of frustration). They’ve since written an apology/clarification, but it seemed like there was a more general lesson that needed learning, both among EA communicators (and, separately, rationalist communicators) and among people who were trying to keep up with the latest advice/news/thoughts.
The takeaways I meant to be building towards (which, I recognize now, I didn’t explicitly say at all and probably should have) were:
If you’re a communicator, make sure the concept you’re communicating degrades gracefully as it loses nuance (and this is important enough that it should be among the things we hold thought leaders accountable to). Include the nuance, for sure. But some concepts predictably become net-harmful when reduced to their post title or single most salient line.
Water flows downhill, and ideas flow towards simplicity. You can’t fight this, but you can design the contours of the hill around your idea such that it flows towards a simplicity that is useful.
If you’re a person consuming content, pay extra attention to the fact that you, and the people around you, are probably missing nuance by default. This causes some kinds of double-illusion-of-transparency. Even if communicators are paying attention to the previous point, it’s still a very hard job. Take some responsibility for making sure you understand concepts before propagating them, and if you’re getting angry at a communicator, double-check what they actually said first.
this is important enough that it should be among the things we hold thought leaders accountable to
I would say that this depends on what kind of communicator or thought leader we’re talking about. That is, there may be a need for multiple, differently-specialized “communicator” roles.
To the extent that you’re trying to build a mass movement, then I agree completely and without reservations: you’re accountable for the monster spawned by the five-word summary of your manifesto, because pandering to idiots who can’t retain more than five words of nuance is part of the job description of being a movement leader. (If you don’t like the phrase “pandering to idiots”, feel free to charitably pretend I said something else instead; I’m afraid I only have so much time to edit this comment.)
To the extent that you’re actually trying to do serious intellectual work, then no, absolutely not. The job description of an intellectual is, first, to get the theory right, and second, to explain the theory clearly to whosoever has the time and inclination to learn. Those two things are already really hard! To add to these the additional demand that the thinker make sure that her concepts won’t be predictably misunderstood as something allegedly net-harmful by people who don’t have the time and inclination to learn, is just too much of a burden; it can’t be part of the job description of someone whose first duty (on which everything else depends) is to get the theory right.
The tragedy of the so-called “effective altruism” and “rationalist” communities is that we’re trying to be both mass movements and intellectually serious, and we didn’t realize until too late in September the extent to which this presents incompatible social-engineering requirements. I’m glad we have people like you thinking about the problem now, though!
(If you don’t like the phrase “pandering to idiots”, feel free to charitably pretend I said something else instead; I’m afraid I only have so much time to edit this comment.)
You know, it’s kind of dishonest of you to appeal to your comment-editing time budget when you really just wanted to express visceral contempt for the idea that intellectuals should be held accountable for alleged harm from simplifications of what they actually said. Like, it didn’t actually take me very much time to generate the phrase “accountability for alleged harm from simplifications” rather than “pandering to idiots”, so comment-editing time can’t have been your real reason for choosing the latter.
More generally: when the intensity of norm enforcement depends on the perceived cost of complying with the norm, people who disagree with the norm (but don’t want to risk defying it openly) face an incentive to exaggerate the costs of compliance. It takes more courage to say, “I meant exactly what I said” when you can plausibly-deniably get away with, “Oh, I’m sorry, that’s just my natural writing style, which would be very expensive for me to change.” But it’s not the expenses—it’s you!
Except you probably won’t understand what I’m trying to say for another three days and nine hours.
I agree that this applies more to mass movements than smaller intellectual groups.
Recall that my claim is “if you’re trying to coordinate with 1/10/100/1000+ people, these are the constraints or causes/effects on how you can communicate (which are different for each scale)”.
It also naively suggests different constraints on EA (which seems a bit more like a mass movement) than on LessWrong (which sort of flirted with being a mass movement, but then didn’t really follow up on it; it seems to me that the number of ‘serious contributors’ is more like “around 100-200” than “1000+”). And meanwhile, not everyone on LW is actually trying to coordinate with anyone, which is fine.
...
There are some weirder questions that come into play when you’re building a theory about coordination, in public in a space that does coordination. For now, set those aside and focus just on things like, say, developing theories of physics.
If you’re not trying to coordinate with anyone, you can think purely about theory with no cost.
If you’re an intellectual trying to coordinate only with intellectuals who want to follow your work (say, in the ballpark of 10 people), you can expect to have N words’ worth of shared nuance. (My previous best guess for N is 200,000 words’ worth, but I don’t strongly stand by that guess.)
It is an actually interesting question, for purely intellectual pursuits, whether you get more value out of having a single collaborator you spend hours each day talking to, vs. a larger number of collaborators. You might want to focus on getting your own theory right without regard for other people’s ability to follow you (and if so, you might keep it all to yourself for the time being, or you might post braindumps to a public forum without optimizing them for readability, let others skim them and see if they’re worth pursuing, and then only communicate further with those people if it seems worth it).
But there is an actual benefit to your ability to think in having other people who can understand what you’re saying, so they can critique it (or build off it). This may (or may not) lead you to decide it’s worth putting effort into distillation, so that you can get more eyes reading the thing. (Or you might grab all the best physicists and put them in a single lab together, where nobody has to spend effort per se on distillation; it just happens naturally as a consequence of conversation.)
Again, this is optional. But it’s an open question, even just in the domain of physics, how much you want to try to coordinate with others, and then what strategies that requires.
trying to coordinate with 1/10/100/1000+ people [...] not everyone on LW is actually trying to coordinate with anyone, which is fine.
I wonder if it might be worth writing a separate post explaining why the problems you want to solve with 10/100/1000+ people have the structure of a coordination problem (where it’s important not just that we make good choices, but that we make the same choice), and how much coordination you think is needed?
In World A, everyone has to choose Stag, or the people who chose Stag fail to accomplish anything. The payoff is discontinuous in the number of people choosing Stag: if you can’t solve the coordination problem, you’re stuck with rabbits.
In World B, the stag hunters get a payoff of n^1.1 stags, where n is the number of people choosing Stag. The payoff is continuous in n: it would be nice if the group was better-coordinated, but it’s usually not worth sacrificing on other goals in order to make the group better-coordinated. We mostly want everyone to be trying their hardest to get the theory of hunting right, rather than making sure that everyone is using the same (possibly less-correct) theory.
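To make the shape of the two payoff functions concrete, here’s a minimal sketch (the all-or-nothing threshold and the n^1.1 exponent are as described above; the group size and the value of full cooperation are arbitrary illustrative assumptions):

```python
# Toy comparison of the two worlds. Assumptions not given above: a group of 10
# hunters, and full cooperation in World A yielding one stag per hunter.

def world_a_payoff(n_stag: int, group_size: int = 10) -> float:
    """World A: stag hunters accomplish nothing unless *everyone* chose Stag."""
    return float(n_stag) if n_stag == group_size else 0.0

def world_b_payoff(n_stag: int) -> float:
    """World B: stag hunters collectively get n^1.1 stags, continuous in n."""
    return n_stag ** 1.1

for n in range(0, 11):
    print(f"{n:2d} stag-hunters -> World A: {world_a_payoff(n):5.1f}, "
          f"World B: {world_b_payoff(n):5.2f}")
```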
I think I mostly perceive myself as living in World B, and tend to be suspicious of people who seem to assume we live in World A without adequately arguing for it (when “Can’t do that, it’s a coordination problem” would be an awfully convenient excuse for choices made for other reasons).
Stag/Rabbit is a simplification (hopefully obvious, but worth stating explicitly to avoid accidental motte/bailey-ing). A slightly higher-resolution simplification:
When it comes to “what norms do we want”, it’s not that you either get all-or-nothing, but if different groups are pushing different norms in the same space, there’s deadweight loss as some people get annoyed at other people for violating their preferred norms, and/or confused about what they’re actually supposed to be doing.
[modeling this out properly and explicitly would take me at least 30 minutes and possibly much longer. Makes more sense to do later on as a post]
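A minimal toy sketch of the kind of thing I have in mind, just to gesture at the shape of it (the functional form and numbers are arbitrary illustrative assumptions, not a worked-out model):

```python
# Toy sketch only: a "space" of `size` people, `k` of whom follow norm A and the
# rest norm B. Assume each cross-norm interaction produces a fixed amount of
# friction (annoyance, confusion); total friction is the deadweight loss
# relative to everyone sharing a single norm.

def deadweight_loss(k: int, size: int, friction: float = 1.0) -> float:
    """Loss proportional to the number of cross-norm pairs: zero when one norm
    dominates entirely, maximal at an even split."""
    return friction * k * (size - k)

for k in range(0, 11):
    print(f"{k} of 10 on norm A -> deadweight loss {deadweight_loss(k, 10):.0f}")
```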
Oh, I see; the slightly-higher-resolution version makes a lot more sense to me. When working out the game theory, I would caution that different groups pushing different norms is more like an asymmetric “Battle of the Sexes” problem, which is importantly different from the symmetric Stag Hunt. In Stag Hunt, everyone wants the same thing, and the problem is just about risk-dominance vs. payoff-dominance. In Battle of the Sexes, the problem is about how people who want different things manage to live with each other.
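For concreteness, here are the two structures side by side as a minimal sketch (the payoff numbers are the standard textbook illustrations, not anything specific to this thread):

```python
# Payoff matrices as {(row_action, col_action): (row_payoff, col_payoff)}.
# The numbers are conventional textbook values, chosen only to show structure.

# Stag Hunt: symmetric. Both players rank (Stag, Stag) highest; the tension is
# between the risk-dominant equilibrium (Rabbit, Rabbit) and the
# payoff-dominant one (Stag, Stag).
stag_hunt = {
    ("Stag", "Stag"):     (3, 3),
    ("Stag", "Rabbit"):   (0, 2),
    ("Rabbit", "Stag"):   (2, 0),
    ("Rabbit", "Rabbit"): (2, 2),
}

# Battle of the Sexes: asymmetric. Both players prefer coordinating on *some*
# norm over failing to coordinate, but each prefers a different norm to win.
battle_of_the_sexes = {
    ("NormX", "NormX"): (2, 1),
    ("NormX", "NormY"): (0, 0),
    ("NormY", "NormX"): (0, 0),
    ("NormY", "NormY"): (1, 2),
}
```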
Nod. Yeah that may be a better formulation. I may update the Staghunt post to note this.
“Notice that you’re not actually playing the game you think you’re playing” is maybe a better general rule. (I.e., in the Staghunt article, I was addressing people who think that they’re in a prisoner’s dilemma, but actually they’re in something more like a staghunt. But, yeah, at least some of the time they’re actually in a Battle of the Sexes, or… well, actually, in real life it’s always some complicated nuanced thing.)
The core takeaway from the Staghunt article that still seems good to me is “if you feel like other people are defecting on your preferred strategy, actually check to see if you can coordinate on your preferred strategy. If it turns out people aren’t just making a basic mistake, you may need to actually convince people your strategy is good (or learn from them why your strategy is not in fact straightforwardly good).”
I think this (probably?) remains a good strategy in most payoff-variants.