A productive thing to do here would be to try to reconcile the claim that a large number of people can’t reasonably be expected to read more than a few words with the claim that something like EA or Rationalism is possible at anything like the current scale. These are in obvious tension.
Another claim to reconcile with yours would be the claim that there’s anything like law going on, or really anything other than gang warfare.
My claim is “a large number of people can’t reasonably be expected to read more than a few words in common”, which I think is subtly different. (Also, this post wasn’t about ways to address the problem; it was about the default state of the problem in the absence of an explicit coordination mechanism.)
If your book-length treatise reaches 1000 people, probably 10-50 of them read the book and paid careful attention, another 100 or so read it less carefully, a couple hundred skimmed it, and the rest just absorbed a few key points secondhand.
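(To make those rough numbers concrete, here’s a toy sketch of that breakdown; the proportions are just my guesses from the previous sentence, not measured data:)

```python
# Toy back-of-envelope: how a 1000-person audience for a book-length
# treatise might break down by depth of engagement. The proportions are
# illustrative guesses matching the rough numbers above, not data.
audience = 1000
breakdown = {
    "read carefully": 0.03,       # ~10-50 people
    "read less carefully": 0.10,  # ~100 people
    "skimmed": 0.25,              # a couple hundred
}
engaged = {label: round(audience * frac) for label, frac in breakdown.items()}
engaged["absorbed key points secondhand"] = audience - sum(engaged.values())

for label, count in engaged.items():
    print(f"{label}: ~{count}")
```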
I think it is in fact a failure of law that the law has grown to the point where a single person can’t possibly know it all, and only specialists can know most of it (because this creates an environment where most people don’t know what laws they’re breaking, which enables certain kinds of abuse).
I think the way EA and LessWrong work is that there’s a large body of work people are vaguely expected to have read (in the case of LessWrong, I think the core Sequences are around [edit: a million words; I initially was using my cached pageCount rather than wordCount]; I’m not sure how big the overall EA corpus is). EA and LW are filtered for “nerds who like to read”, so you get to be on the higher end of the spectrum of how many people have read how much.
But, it still seems like a few things end up happening:
Important essays definitely lose nuance. “Politics is the Mind-Killer” is one of the common examples of something where the original essay got game-of-telephoned pretty hard by oral culture.
Similarly, EA empirically runs into messaging issues: even though 80k had intentionally tried to downplay the “Earning to Give” recommendation, people still primarily associated 80k with Earning to Give years later. And when they finally successfully switched the message to “EA is talent constrained”, that got misconstrued as well.
Empirically, people also successfully rely on a common culture to some degree. My sense is that the people who tend to do serious work, get jobs, and stick around are the ones who have read at least a good chunk of the words, and they somewhat filter themselves into groups that have read particular subsets. The fact that there are 1000+ people misunderstanding “Politics is the Mind-Killer” doesn’t mean there aren’t also 100-200 people who remember the original claim.
(There are probably different clusters of people who have read different clusters of words, e.g. people who have read the Sequences, people who have read Doing Good Better, people who have read a smattering of essays from each as well as the old GiveWell blog, etc.)
One problem facing EA is that there is not much coordination on which words are the right ones to read. Doing Good Better was written with the goal of being “the thing you give people as their cultural onboarding tool”, AFAICT. But which 80k essays are you supposed to have read? All of them? I dunno, that’s a lot and I certainly haven’t, and it’s not obvious that that’s a better use of my time than reading up on machine learning or the AI Alignment Forum, or going off to learn new things that aren’t part of the core community material.
In the case of LessWrong, I think the core sequences are around 10,000 words, not sure how big the overall EA corpus is.
This feels like a 100x underestimate; The Sequences clock in at over a million words, I believe, and it’s not the case that only 1% of the words are core.
Whoops. I was confusing pages with words.

(The mental action I was performing was “observe what seems to actually happen, then grab the numbers I remembered coinciding with those observations”, rather than working backwards from a model of the numbers. That may or may not have been a good procedure, but in any case it means that being off by a factor of 100 doesn’t influence the surrounding text much.)