You Get About Five Words
Cross-posted from the EA Forum.
Epistemic Status: all numbers are made up and/or sketchily sourced. Post errs on the side of simplistic poetry – take seriously but not literally.
If you want to coordinate with one person on something nuanced, you can spend as much time as you want talking to them – answering questions in realtime, addressing confusions as you notice them. You can trust them to go off and attempt complex tasks without as much oversight, and you can decide to change your collective plans quickly and nimbly.
You probably speak at around 100 words per minute. That’s 6,000 words per hour. If you talk for 3 hours a day, every workday for a year, you can communicate 4.3 million words worth of nuance.
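A minimal sketch of that arithmetic, for concreteness (the 100 wpm rate and 3 hours/day are the post's numbers; the ~240 workdays per year is a back-fitted assumption that reproduces the 4.3 million figure):

```python
WORDS_PER_MINUTE = 100     # casual speaking rate (post's figure)
HOURS_PER_DAY = 3          # hours of conversation per workday (post's figure)
WORKDAYS_PER_YEAR = 240    # ~48 weeks * 5 days (back-fitted assumption)

words_per_hour = WORDS_PER_MINUTE * 60                      # 6,000
words_per_year = words_per_hour * HOURS_PER_DAY * WORKDAYS_PER_YEAR
print(f"~{words_per_year:,} words of nuance per year")      # ~4,320,000
```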
You can have a real conversation with up to 4 people.
(Last year the small organization I work at considered hiring a 5th person. It turned out to be very costly and we decided to wait, and I think the reasons were related to this phenomenon.)
If you want to coordinate on something nuanced with, say, 10 people, you realistically can ask them to read a few books' worth of words. A book is maybe 50,000 words, so you have maybe 200,000 words' worth of nuance.
Alternatively, you can monologue at people, scaling a conversation past the point where people can realistically ask questions. Either way, you have to hope that your books or your monologues happen to address the particular confusions your 10 teammates have.
If you want to coordinate with 100 people, you can ask them to read a few books, but chances are they won’t. They might all read a few books worth of stuff, but they won’t all have read the same books. The information that they can be coordinated around is more like “several blogposts.” If you’re trying to coordinate nerds, maybe those blogposts add up to one book because nerds like to read.
If you want to coordinate 1,000 people… you realistically get one blogpost, or maybe one blogpost worth of jargon that’s hopefully self-explanatory enough to be useful.
If you want to coordinate thousands of people…
You have about five words.
This has ramifications for how complicated a coordinated effort you can attempt.
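Lining the post's made-up numbers up side by side shows how steeply the budget decays. A toy sketch, assuming the figures above (the word counts for 100 and 1,000 people are my own guesses at typical blogpost lengths, not from the post):

```python
# Rough budget of shared nuance vs. audience size.
# Figures for 1 and 10 people are the post's; the blogpost
# word counts for 100 and 1,000 are guessed; "thousands"
# bottoms out at about five words.
nuance_budget = {
    1: 4_300_000,    # a year of real conversation
    10: 200_000,     # a few books' worth
    100: 15_000,     # several blogposts (guessed lengths)
    1_000: 2_000,    # one blogpost (guessed length)
    10_000: 5,       # about five words
}

for people, words in sorted(nuance_budget.items()):
    print(f"{people:>6,} people -> ~{words:,} words of shared nuance")
```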
What if you need all that nuance and to coordinate thousands of people? What would it look like if the world were filled with complicated problems that required lots of people to solve?
I guess it’d look like this one.
I use this concept often, including explicitly thinking about which (roughly) five words I want to be the takeaway, or that would deliver the payload, or that I expect people to actually take away from something. I also think I've linked to it quite a few times.
I’ve also used it to remind people that what they are doing won’t work because they’re trying to communicate too much content through a medium that does not allow it.
A central problem is how to create building blocks that have a lot more than five words, but where the five words in each block can do a reasonable substitute job when needed.
As an additional data point, a link to this post will appear in the 12/10 Covid weekly roundup.
This is pretty cool. Can you give some examples of about-five-word takeaways you've created for different contexts?
Here are some attempted takeaways for things I’ve written, some of which were explicit at the time, some of which were implicit:
Covid-19: “Outside, social distance, wear mask.”
Simulacra (for different posts/models): “Truth, lies, signals, strategic moves” or “level manipulates/dominates level below” or “abstractions dominate, then system collapses”
Mazes: “Modern large organizations are toxic” or “middle management destroys your soul”
Asymmetric Justice: “Unintentional harms count, benefits don’t” or “Counting only harms destroys action” or similar.
Or one can notice that we are abstracting out a conclusion from someone else's work, or think about what we hope another person will take away. Often, but not always, it's the title. Constantly look to improve. Pain not unit of effort. Interacting with system creates blameworthiness. Default AI destroys all value. Claim bailey, retreat to motte. Society stuck in bad equilibrium. Etc.
I’ve found this valuable to keep in mind.