Partial Self Review:
There’s an obvious set of followup work to be done here, which is to ask “Okay, this post was vague poetry meant to roughly illustrate a point. But how many words do you actually have, precisely?” What are the in-depth models that let you predict how much nuance you have to work with?
Less obvious to me is whether this post should become a longer, more rigorous post, or whether it should stay its short, poetic self, with those questions explored in a different post with different goals.
Also less obvious to me is how the LessWrong Review should relate to short, poetic posts. I think it’s quite important that this post be clearly labeled as poetry, and also that we consider the work “unfinished” until there is some kind of post that delves more deeply into these questions. But, for example, I think Babble last year was more like poetry than like a clear model, and it was nonetheless valuable and good to be part of the Best Of book.
So, I’m thinking about this post through two lenses:
What are simple net-improvements I can make to this post, without sacrificing its overall aim of being short/accessible/poetic?
Sketch out the research/theory agenda I’d want to see for the more detailed version.
I did just look over the post, and noticed that the poetry… wasn’t especially good. There is mild cleverness in having the sections get shorter as they discuss larger coordination-groups. But I think I could probably write a post that was differently poetic. Or find ways of making a short/accessible version that doesn’t bother being poetic but is nonetheless clear.
I’m worried about every post having to be a rigorous, fully caveated explanation. That might be the right standard for the review, but not obviously.
Some points that should be made somewhere, whether editing the OP or in a followup:
1. Yes, you can invest in processes that help you reach more people, more reliably. But those tools are effortful to build. Those tools are also limited in their capability. You have to figure out how to incentivize people to use the tools.
The point of this post is about the default world where you haven’t built those tools. And if you’re reading this, you almost certainly haven’t. (I think the LessWrong team has made some effort to build such tools, but nonetheless, even here, most people haven’t actually read the Sequences. Most people forget that “Politics is the Mind Killer” is actually making a nuanced claim about “don’t use unnecessarily political examples.”)
This post is not about “what is the upper bound”. But it is about what constraints you (yes, you, personally) are probably working under.
2. Unless you have built a system that preserves perfect nuance, as you add more people, you will gradually lose nuance, until that nuance approaches some minimum.
What is that minimum?
Well, if you’re measuring in wordcount, the lowest conceivable bound is… one word. That probably doesn’t make much sense; one-word memes are pretty rare.
I think “2-7 words, but the context is missing” is a typical unit of meme transfer.
“2-7 words with missing nuance” seems like the eventual minimum. This is also consistent with the “working memory hypothesis”, where this bound is specifically based on “how many existing concepts a human can hold together at once.” But, that’s a bit more theoretical and I’m not sure I’d endorse it after thinking about it more.
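To make that decay concrete, here’s a minimal toy sketch. Everything in it is an invented assumption for illustration (the 60% per-retelling retention rate, the 5-word floor); the only point is that geometric decay hits the floor surprisingly fast:

```python
# Toy model: nuance (measured in words retained) decays geometrically
# with each retelling, and never drops below the ~5-word meme floor.
# The retention rate and floor are illustrative assumptions, not data.

def surviving_words(original_words: int, retellings: int,
                    retention: float = 0.6, floor: int = 5) -> int:
    words = original_words
    for _ in range(retellings):
        words = max(floor, int(words * retention))
    return words

# A 5,000-word blogpost reaches the floor after ~14 retellings.
for hops in [0, 1, 3, 6, 14]:
    print(hops, surviving_words(5000, hops))
```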
It occurs to me that you can end up with negative wordcount/nuance, once people start actively garbling your message.
This is actually one of my motivations for the OP, now that I think about it. One way or another, your message will eventually distill down to 2-7 words. If you designed the message to distill gracefully, then you get to pick a message that is at least reasonably aligned with your original intent. If you tried to convey a massive nuanced message, there is a good chance it collapses into something you didn’t intend, as people attempt to distill it for themselves. (Or adversaries deliberately misrepresent it.)
Oh, speaking of which:
3. Adversaries
Some people will actively misrepresent your idea.
4. More people == less competent people
One gear not spelled out in the OP is that as you coordinate with more people, you’re probably coordinating with less competent people. People who are less smart. People with less background knowledge. If you’re running a company, people who aren’t as skilled but maybe juuust skilled enough to be worth hiring anyway. If you’re running a political or religious movement, you’re probably taking on a lot of average joes.
This isn’t necessarily true. You can run a company with 100,000 elite skilled craftsfolk (though it’s hard to find/hire them all without creating a moral maze), or you could have just started with average joes from the beginning so there isn’t actually anywhere less sophisticated to go.
But, this is an additional reason that your nuance will be lost as the coordination effort grows.
5. People just… don’t read, not reliably, not most of the time.
This was meant to be an implied basic background fact, but, some rationalists are surprised by this. People have a ton of things competing for their attention, and they don’t actually sit and read most things. People skim. They read headlines.
Yes, this is even true of rationalists. (I once had to tell someone who’d printed out a lovely Solstice Program, complete with instructions, that most people would not read it, and that they’d have to give the instructions verbally at the event itself if they wanted anyone to be able to act on them.)
6. Games of Telephone
Most larger-scale coordination efforts require multiple chains of “Alice tells Bob tells Charlie.” Alice tells Bob a reasonably faithful but incomplete version of the thing. Bob tells Charlie a slightly more garbled version. Charlie only bothers repeating the headline to Donna, and when Donna asks for clarifications Charlie only has Bob’s half-remembered explanation, which he garbles further.
Edward only ever hears the headline, and only ever repeats that.
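Here’s a hedged sketch of that chain. The per-link retention probabilities and the “detail” strings are invented for illustration; the point is just that the headline survives while the supporting details wash out:

```python
import random

# Toy telephone chain: each link passes on the headline plus a random
# subset of the supporting details. The retention probabilities
# (0.8, 0.5, 0.2, 0.0) are illustrative assumptions, not measurements.

def telephone(message: list[str], link_retention=(0.8, 0.5, 0.2, 0.0)):
    remembered = list(message)
    for p in link_retention:  # Alice->Bob, Bob->Charlie, Charlie->Donna, Donna->Edward
        headline, details = remembered[0], remembered[1:]
        remembered = [headline] + [d for d in details if random.random() < p]
    return remembered

random.seed(0)
message = [
    "headline: politics is the mind-killer",
    "detail: avoid unnecessarily political examples",
    "detail: the warning applies even when your side is right",
    "detail: tribal instincts distort otherwise-careful reasoning",
]
print(telephone(message))  # Edward typically ends up with only the headline
```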
7. People don’t remember that much
People remember stuff when they actually think deeply about it repeatedly (especially if they actually use the concept regularly).
They also remember stuff when they tell their friends about it. They remember stuff when other people remind them.
The Game of Telephone isn’t just relevant for how the message gets distorted. It’s also relevant for how the message gets repeated. If Alice reads the entire blogpost and tries to repeat it faithfully… but then later mostly hears Donna and Edward and Francine repeating the headline, Alice might end up forgetting most of the details, and eventually not even remembering that “Politics is the Mind Killer” was largely about Avoiding Unnecessarily Political Examples.
8. Coordination vs “What concepts most people end up interacting with”
A slightly different take from “how many people you’re coordinating with” is “given how memes spread, most people who interact with your concept will be interacting with a dumbed-down version of it, and the more people who know the concept, the more likely it is that most of them know only the minimal 2-7 word version.”
I’m not sure whether this has implications or not, beyond “be ready for most people to only understand an oversimplified version of your meme.”
9. Human Adversaries and Memetic Adversaries
Sometimes humans actively misrepresent your thing on purpose. Also, sometimes humans accidentally repurpose your thing because they had some more commonly felt need, and your square-peg concept was the nearest thing they could shove into a round hole. (See: “Avoiding Jargon Confusion.”)
There’s at least one concept I haven’t written up, because I couldn’t think of a title that wouldn’t automatically get repurposed into a net-harmful bastardization of itself.
Also, people sometimes want your coordination concept to be something that is more convenient for them. (See: “Talent Gaps”, which actually meant “EA needs a few highly talented people with very specific qualities” but which people re-imagined as “I’m pretty talented! EA needs me!”, and were later disappointed.)
10. I have very little idea how the scaling effect actually kicks in. I’m much more confident that the eventual limit is 2-7 words than I am about the circumstances under which that limit kicks in.
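One toy way to pose that scaling question concretely: a power-law curve in audience size, bottoming out at the floor. The functional form, exponent, starting budget, and floor below are all made-up knobs, not findings; the value is just having something explicit to argue with:

```python
# Hypothetical scaling curve: shared nuance shrinks as a power law in
# audience size N, bottoming out at the 2-7 word floor. Every constant
# here is an invented assumption, chosen only to make the shape visible.

def nuance_budget(n_people: int, solo_budget: int = 50_000,
                  exponent: float = 1.2, floor: int = 5) -> int:
    return max(floor, int(solo_budget / n_people ** exponent))

for n in [1, 10, 100, 1_000, 100_000]:
    print(f"{n:>7} people -> ~{nuance_budget(n)} words")
```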
Future Work
What is the actual upper bound for large groups, if you’re trying? How hard is it to try that hard?
If you make people dedicate their lives to memorizing an extremely detailed Tome, you can probably get a pretty large amount of shared context. The problem comes if you ever need to change the shared context. (See: people memorizing the Bible, but 2000 years later it turns out a lot of the Bible is false or philosophically antiquated)
What’s the most “live” nuance you can maintain, where you retain the power to shift it over time?
I think this is easiest to get if you have an actual company where people are paid to pay attention. But even then people are pretty busy and lazy. You can start with instilling strong memes about “yes it’s actually really god damn important that we all be aligned and read the memo every morning”, but I think that requires a pretty significant cultural onboarding.
Scott Alexander’s Ars Longa Vita Brevis seems relevant: you can get pretty far if you dedicate a culture to not only figuring out the new concepts to coordinate on, but also learning how to distill new concepts down, address common misconceptions, etc.