You Get About Five Words
Cross-posted from the EA Forum.
Epistemic Status: all numbers are made up and/or sketchily sourced. Post errs on the side of simplistic poetry – take seriously but not literally.
If you want to coordinate with one person on something nuanced, you can spend as much time as you want talking to them – answering questions in real time, addressing confusions as you notice them. You can trust them to go off and attempt complex tasks without much oversight, and you can decide to change your collective plans quickly and nimbly.
You probably speak at around 100 words per minute. That’s 6,000 words per hour. If you talk for 3 hours a day, every workday for a year, you can communicate 4.3 million words’ worth of nuance.
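As a quick sanity check, here is that arithmetic as a minimal Python sketch; the ~240 workdays per year is my assumption, not a number from the post:

```python
# Back-of-the-envelope check of the 4.3 million figure.
words_per_minute = 100
words_per_hour = words_per_minute * 60    # 6,000
hours_per_day = 3
workdays_per_year = 240                   # assumption: ~48 weeks x 5 days
words_per_year = words_per_hour * hours_per_day * workdays_per_year
print(f"{words_per_year:,}")              # 4,320,000, i.e. ~4.3 million
```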
You can have a real conversation with up to 4 people.
(Last year, the small organization I work at considered hiring a fifth person. It turned out to be very costly and we decided to wait, and I think the reasons were related to this phenomenon.)
If you want to coordinate on something nuanced with, say, 10 people, you can realistically ask them to read a few books’ worth of words. A book is maybe 50,000 words, so you have maybe 200,000 words’ worth of nuance.
Alternatively, you can monologue at people, scaling a conversation past the point where they can realistically ask questions. Either way, you have to hope that your books or your monologues happen to address the particular confusions your 10 teammates have.
If you want to coordinate with 100 people, you can ask them to read a few books, but chances are they won’t. They might each read a few books’ worth of material, but they won’t all have read the same books. The information they can all be coordinated around is more like “several blogposts.” If you’re trying to coordinate nerds, maybe those blogposts add up to one book, because nerds like to read.
If you want to coordinate 1,000 people… you realistically get one blogpost, or maybe one blogpost’s worth of jargon that’s hopefully self-explanatory enough to be useful.
If you want to coordinate thousands of people...
You have about five words.
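Taken together, the post’s numbers sketch a rough curve from audience size to nuance budget. Here is a minimal Python sketch of that curve, using the post’s own (made-up) figures; the tier boundaries and the ~2,000-word estimate for a blogpost are my assumptions:

```python
# The post's (explicitly made-up) numbers as a rough curve from
# audience size to how many words of nuance you can coordinate around.
WORD_BUDGETS = [
    (4, 4_300_000),   # real conversation: ~3 h/day of talking for a year
    (10, 200_000),    # a few books' worth of shared reading
    (100, 50_000),    # several blogposts, maybe one book for nerds
    (1_000, 2_000),   # roughly one blogpost (assumed ~2,000 words)
]

def word_budget(n_people: int) -> int:
    """Approximate words of nuance available when coordinating n_people."""
    for max_audience, words in WORD_BUDGETS:
        if n_people <= max_audience:
            return words
    return 5  # thousands of people: you get about five words

for n in (1, 10, 100, 1_000, 100_000):
    print(f"{n:>7,} people -> ~{word_budget(n):,} words")
```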
This has ramifications for how complicated a coordinated effort you can attempt.
What if you need all that nuance and you need to coordinate thousands of people? What would it look like if the world were filled with complicated problems that required lots of people to solve?
I guess it’d look like this one.
Comments

I use this concept often, including explicitly thinking about which (roughly) five words I want to be the takeaway, which would deliver the payload, or which I expect to be the takeaway from something. I also think I’ve linked to this post quite a few times.
I’ve also used it to remind people that what they are doing won’t work because they’re trying to communicate too much content through a medium that does not allow it.
A central problem is how to create building blocks that have a lot more than five words, but where the five words in each block can do a reasonable substitute job when needed.
As an additional data point, a link to this post will appear in the 12/10 Covid weekly roundup.
This is pretty cool. Can you give some examples of roughly-five-word takeaways you’ve created for different contexts?
Here are some attempted takeaways for things I’ve written, some of which were explicit at the time, some of which were implicit:
Covid-19: “Outside, social distance, wear mask.”
Simulacra (for different posts/models): “Truth, lies, signals, strategic moves” or “level manipulates/dominates level below” or “abstractions dominate, then system collapses”
Mazes: “Modern large organizations are toxic” or “middle management destroys your soul”
Asymmetric Justice: “Unintentional harms count, benefits don’t” or “Counting only harms destroys action” or similar.
One can also notice that we’re abstracting out a conclusion from someone else’s work, or think about what we hope another person will take away. Often, but not always, it’s the title: “Constantly look to improve.” “Pain not unit of effort.” “Interacting with system creates blameworthiness.” “Default AI destroys all value.” “Claim bailey, retreat to motte.” “Society stuck in bad equilibrium.” Etc.
I’ve found this valuable to keep in mind.