Should Rational Animations invite viewers to read content on LessWrong?
When I introduced Rational Animations, I wrote:
What I won’t do is aggressively advertise LessWrong and the EA Forum. If the channel succeeds, I will organize fundraisers for EA charities. If I adapt an article for YT, I will link it in the description or just credit the author. If I use quotes from an author on LW or the EA Forum, I will probably credit them on-screen. But I will never say: “Come on LW! Plenty of cool people there!” especially if the channel becomes big. Otherwise, “plenty of cool people” becomes Reddit pretty fast.
If the channel becomes big, I will also refrain from posting direct links to LW and the EA Forum. Remind me if I ever forget. And let me know if these rules are not conservative enough.
In my most recent communications with Open Phil, we discussed the fact that a YouTube video aimed at educating on a particular topic would be more effective if viewers had an easy way to fall into an “intellectual rabbit hole” to learn more. So a potential way to increase Rational Animations’ impact is to increase the number of calls to action to read the sources of our videos. LessWrong and the EA Forum are good candidates for people to go learn more. Suppose we adapt an article from the sequences. One option is to link to readthesequences.com, another is to link to the version of the article hosted on LessWrong. It seems to me that linking to the version hosted on LessWrong would dramatically increase the chance of people falling into an “intellectual rabbit hole” and learning a lot more and eventually starting to contribute.
Here are what I think are the main cons and pros of explicitly inviting people to read stuff on LessWrong:
Cons:
If lots of people join at once we may get a brief period of lower-quality posts and comments until moderation catches up.
If we can’t onboard people fast enough, the quality of content on the site will degrade considerably over time, and people producing excellent content will be driven away.
Pro:
If people read LessWrong, they will engage with important topics such as reducing existential risk from AI. Eventually, some of them might be able to contribute in important ways, such as by doing alignment research.
I’m more optimistic now about trying to invite people than I was in 2021, mainly thanks to recent adjustments in moderation policy and solutions such as the Rejected Content Section, which, in my understanding, are meant at least in part to deal with the large influx of new users resulting from the increased publicity around AGI x-risk. I think it likely (~70%) that such moderation norms are strong enough filters that they would select for good contributions and for users who can contribute in positive ways.
Among the cons, I think the second is worse and more likely to happen without a strong and deliberate moderation effort. That said, it’s probably relatively easy and safe to run experiments.
I don’t think many people would make a new account as a result of an isolated call to action to read an article hosted on LessWrong. I have high confidence that the number of new accounts would be between 100 and 1000 per million views, given the three conditions in this market. You should treat the market as providing a per-video upper bound, since the conditions describe some very aggressive publicity.
Market conditions:
In a video, Rational Animations invites viewers to read a LessWrong article or sequence. It’s an explicit invitation made in the narration of the video rather than, e.g., only as in-video text.
The article/sequence is linked in three places: at the end of the video (e.g., in place or near the Patreon link), at the top of the video description, and in the pinned comment.
The video accrues 1 million views.
So, what should I do? I’m asking moderators and users too. How liberal should I be with inviting people here? Can/should I do some experiments? If I don’t get a clear and united “go ahead” from the community and moderators I won’t take any strong unilateral action. I feel like there needs to be a strong majority for me to proceed.
I will weigh particularly highly the opinions of moderators and long-time users.
I’m particularly interested in initially discussing the idea of starting small-scale and experimenting with tools such as tracked links for new accounts.
I think lesswrong would need a beginner/application section to deal with something like this. I’d also suggest doing it at the end of a long and very detailed single video with intentionally lower production value and much higher math content, more like a manim video than a mainline youtube video. That will produce more natural filtering: to get into the technical side of youtube, you need to speedwatch long, highly technical videos and skim the beginnings of many long technical videos when the recommender gives them to you, while also being disciplined about not watching nontechnical stuff. People with that pattern of video-watching are the ones who should almost certainly be encouraged to come visit lesswrong.
Upvoted for this idea.
When new users pop up in the moderation dashboard (which happens when they make their first post/comment), we see the HTTP referrer they had the first time they landed on the site. (At domain granularity, not page-level granularity, and only for users who clicked a link, not users who typed the URL into the address bar.) So, if we get a bunch of users coming from YouTube leaving bad comments that would make the site worse, we do have the ability to notice that’s what’s happening.
That said, I do think that there’s a real risk here. Among all the places on the internet, YouTube is unique in that it isn’t filtering for people who read a lot, so it may attract a crowd that’s both much larger and much less intellectually sophisticated than LW is used to. On the other hand, maybe the subset of people who wouldn’t fit on LW wind up bouncing off the walls of text anyways. I could see it going either way.
Just my opinion, not based on any data...
Good:
including a link to a relevant LW article in the description of the video
Bad:
link to LW homepage or the Sequences
link directly in the video, or the video explicitly encouraging people to click a link
The intuition is that people who read the video descriptions are more likely the kind of people we want here than those who only watch videos; and that discussions of specific topics under specific articles are more likely to be productive than just “here is a rationality website, do something”.
First things first, I’m pro experiments so would be down to experiment with stuff in this area.
Beyond that, seems to depend on a couple of things:
The details of what inviting viewers to LW would look like.
What the LessWrong team thinks is the best use of our time.
The Details Matter
LessWrong currently has about 2,000 logged-in users per day, and 20-100 new users each day (a wide range that includes recent peaks). If those numbers wouldn’t change that much, perhaps +10%, it wouldn’t be a big deal. On the other hand, if Rational Animations were wildly successful and driving several hundred people to create LW accounts a day (with flow-through to posts/comments), that’d be a big deal. That would require work to handle.
Another big question is the level of sophistication of users. If you’re sending people to engage with more advanced stuff vs more intro content, that’s a big difference in how I expect them to relate to the site.
Should LessWrong be providing intro material and answers?
Even before you posted this, I’d been thinking about this question. I think in terms of knowledge and skill, we’re well positioned to provide a web resource of intro Alignment stuff. It’s not quite the core thing of “create a thriving intellectual community that can make progress on the important hard problems”, but it might be a worthwhile enough opportunity that we should put time into it. I’d already been planning to do some work on our wiki system with an eye towards having more intro AI content for a broader audience.
So maybe. Maybe we should lean into this as an opportunity, even though it’ll take work both to keep it from affecting the site in bad ways (moderation, etc.) and possibly to prepare better material for a broader audience.
Moderation Costs
To provide some insight: on the margin, more new users means more work for us. We process all first-time posters/commenters manually, so there’s a linear factor there, and of new users, some require follow-up and then moderation action. So currently, there’s a human cost to adding more people.
It might be possible to build tech to reduce the marginal cost of new users, but that itself is upfront work. Our recent moderation sprint cut down moderation effort a lot, but that was 3-4 weeks worth of work, and I think getting further gains would take at least similar time.
The other thing is that a system under mild pressure is very different from one under intense pressure. With the latter, I think we’d find we have to do lots of things to plug the gaps.
There’s a question of whether huge influxes will happen involuntarily (somewhat my expectation); in a world where we’ve built the tech to handle them, we should certainly also invite Rational Animations viewers.
It will be a while before we run an experiment, and when I’d like to start one, I’ll make another post and consult with you again.
When/if we do one, it’ll probably look like what @the gears to ascension proposed in their comment here: a fairly technical video that will likely get fewer views than usual and filters for the kind of people we want on LessWrong. How I would advertise it could resemble the description of the market on Manifold linked in the post, but I’ll run the details by you first.
This provides important context. 20-100 new users per day is a lot. At the moment, Manifold predicts that a strong call to action plus 1M views would bring Rational Animations an expected 679 new users. That would probably look like 300-400 new users in the first couple of weeks the video is out and an additional 300-400 over the following few months. That’s not a lot!
As a simplification, suppose the video gets 200k views during the first day. That would correspond to about 679/5 = 136 new expected users. Suppose on the second day we get 100k more views. That would be about 70 more users. Then suppose, simplifying, that the remaining 200k views are spread equally over the remaining 12 days. That would correspond to merely 11 additional users per day.
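The arithmetic above can be sketched as a simple linear model, assuming (as the Manifold market does) that new accounts scale linearly with views at 679 expected users per million:

```python
# Back-of-the-envelope model of expected new LessWrong accounts.
# The 679-per-million figure comes from the Manifold estimate in the post;
# the linear-scaling assumption is a simplification.
EXPECTED_USERS_PER_MILLION_VIEWS = 679

def expected_new_users(views: int) -> float:
    """Expected new accounts for a given number of video views."""
    return EXPECTED_USERS_PER_MILLION_VIEWS * views / 1_000_000

day1 = expected_new_users(200_000)           # day 1: 200k views
day2 = expected_new_users(100_000)           # day 2: 100k more views
tail = expected_new_users(200_000) / 12      # 200k views over 12 days

print(round(day1), round(day2), round(tail))  # 136 68 11
```

This matches the figures in the comment: a large spike on day one, then a quick drop to roughly a dozen expected new users per day.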
I would be happy to link such things if you produce them. For now, linking the AI Safety Fundamentals courses should achieve ~ the same results. Some of the readings can be found on LessWrong too so people may discover LW as a result too. That said, having something produced by LW probably improves the funnel.
Duly noted. Another interesting datum would be to know the fraction of new users that become active posters and how long they take to do that.
Some rough thoughts:
I think it’s kind of inevitable that LessWrong will eventually get a huge number of people attempting to join. I do think we’ll need to deal with that somehow or other sooner or later. I don’t think we’re ready yet. I think it’s possible for us to become ready if we prioritize it.
We’ve been thinking about this a lot lately. In addition to the rejected section, we also recently shipped AutoRateLimits for low- and negative-karma users. I’ll have a post about this soon, but the basic gist is that users start with a rate limit of 3 comments per day and 2 posts per week. That rate limit disappears once they hit 5 karma. If their karma becomes negative, the limit is made stricter. (At −1 karma, it becomes 1 post per week and 1 comment per day. At −15 karma they can only write 1 comment every 3 days. At −30 karma they can only submit 1 post every 2 weeks.)
I don’t think this is enough to handle a really large influx. Even if they’re heavily rate limited, a bunch of people posting a mediocre comment every 3 days would add up to a significant drop in the site’s signal-to-noise ratio. You could make the rate limits stricter, but that makes things harder for new users and would make the site feel more punishing. Karma is only a rough measure of quality, and there’s a lot of room for disagreement over whether a given downvote is fair.
In the pre-GPT world I’d be more optimistic about making some kind of test that checks whether a user has a reasonable understanding of what LessWrong is about and is able to participate. In the post-GPT world it’s less clear how to do that sort of thing – any kind of automated test is basically a test for “do they know how to use an LLM?”.
There are options like “let established users approve new users.” I’d set my bar fairly high for established users to avoid a situation where each generation of users lets in a somewhat weaker set of users. The kind of bar I’d feel safe with for users-with-approval-power is something like “they’ve gotten 2-3 posts highly upvoted in the LessWrong Annual Review”.
That all said, I’m not actually sure how big an influx we’re likely to be talking about from your video. Would it be larger than the TIME article?
You could have AutoAutoRateLimits. That is, you pick some target, such as “total number of posts/comments per unit time” or “total number of posts/comments per unit time from users with <15 karma”, and automatically adjust the rate limits to keep the observed quantity below that global target. Maybe you add complications, such as a floor so the rate limit never relaxes to None, and maybe you add inertia. (There are plausibly bad effects to this, though; IDK.)
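A minimal sketch of such a controller, with all constants and names made up for illustration: it rescales a comments-per-day limit for low-karma users toward a global volume target, with a floor, a ceiling, and exponential smoothing for inertia.

```python
# Hypothetical AutoAutoRateLimit controller. The target, floor, ceiling,
# and smoothing factor are invented values, not anything LessWrong uses.
TARGET_DAILY_COMMENTS = 200   # desired total daily comments from <15-karma users
FLOOR = 1.0                   # never limit below 1 comment/day
CEILING = 10.0                # never relax beyond 10 comments/day
SMOOTHING = 0.3               # inertia: fraction of the adjustment applied per step

def next_limit(current_limit: float, observed_daily_comments: int) -> float:
    """One update step: scale the limit toward target/observed volume."""
    if observed_daily_comments == 0:
        proposed = CEILING  # no pressure at all: relax fully
    else:
        # If observed volume is double the target, halve the limit, etc.
        proposed = current_limit * TARGET_DAILY_COMMENTS / observed_daily_comments
    smoothed = (1 - SMOOTHING) * current_limit + SMOOTHING * proposed
    return min(CEILING, max(FLOOR, smoothed))
```

The smoothing term is the “inertia” mentioned above (the limit moves only 30% of the way toward the proposed value each step), and the clamp implements the floor that keeps the limit from ever reaching zero.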
I’ve seen this and will write up some thoughts / start participating in conversation in the next day or two.