tl;dr: the number of people who could write sensible arguments is small; they would probably still be vastly outnumbered; and it makes more sense to focus on actually trying to talk to people who might have an impact
EDIT: my arguments mostly apply to the “become a twitter micro-blogger” strat, not to the “reply guy” strat that Jacob seems to be arguing for
As someone who has historically written multiple tweets that were seen by the majority of “AI Twitter”, I’m not that optimistic about the “let’s just write sensible arguments on twitter” strategy.
For context, here’s my current mental model of the different “twitter spheres” surrounding AI twitter:
- ML Research twitter: academics, plus OAI / GDM / Anthropic announcing a paper that everyone then talks about
- (SF) Tech Twitter: tweets about startups, VCs, YC, etc.
- EA folks: a lot of ingroup EA chat, highly connected graph, veneration of QALY the lightbulb and mealreplacer
- tpot crew: “This Part Of Twitter”; used to be mostly post-rats, I reckon, now growing bigger with vibecamp events. They also have a norm of always liking before replying, which amplifies their reach
- Pause AI crew: folks with pause (or stop) emojis, who often call out bad behavior from labs building AGI, quoting (e.g. with clips) what some particular person said, or commenting on e.g. Sam Altman’s tweets
- AI Safety discourse: some people who do safety research; it mostly happens in response to a top AI lab announcing safety research or some other big release. Probably a subset of ML Research twitter at this point; intersects with EA folks a lot
- AI policy / governance tweets: commentary on regulations currently being passed (like the EU AI Act or SB 1047), often replying to / quote-tweeting Tech Twitter
- the e/accs: loosely connected to Tech Twitter, but mostly anonymous accounts with more extreme views; they dunk a lot on EAs & safety / governance people
I’ve watched these groups evolve since 2017, and maybe the biggest recent change has been how much tpot (which started circa 2020, I reckon) and e/acc accounts (which have grown a lot through twitter spaces and mainstream coverage) have grown in the past 2 years. In comparison, the EA / policy / pause folks have also started posting more, but their accounts are quite small compared to the rest, and the conversation still stays contained in the same EA-adjacent bubble.
I do agree to some extent with Nate Showell’s comment that the reward mechanisms don’t incentivize high-quality thinking. That said, if you naturally enjoy writing longform posts to crystallize your thinking, then posting them as a form of micro-blogging, with the intent of getting feedback on thinking you’d be doing anyway, could be good; and if everyone started doing that, it could shift the quality of discourse by a small bit.
To give an example of the reward mechanisms at work, my last two tweets were: 1) a diagram trying to formalize the main cruxes that would make you want the US to start a Manhattan Project, and 2) a greentext-format hyperbolic biography of Leopold (who wrote the Situational Awareness series on AI and was recently on Dwarkesh’s podcast).
Both took me about the same time to make (30 minutes to 1h), but the diagram got 20k impressions whereas the greentext got 2M (100x more). I think this is because a) many more tech people are interested in current-discourse content than in infographics, b) tech people don’t agree with the regulation stuff, and c) in general, entertainment is shared more widely than informative content.
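For concreteness, here’s the ratio spelled out (a quick sketch in Python; the impression counts are just the rough numbers quoted above, not exact analytics):

```python
# Rough impression counts from the two tweets described above.
diagram_impressions = 20_000       # Manhattan Project cruxes diagram
greentext_impressions = 2_000_000  # hyperbolic greentext biography

# Entertainment outperformed the informative diagram by ~100x.
print(greentext_impressions / diagram_impressions)  # 100.0
```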
So here are some consequences I expect if LessWrong folks start posting more on X:
- 1. they’re initially not going to reach a lot of people
- 2. it’s mostly going to be ingroup chat with other EA / safety / pause / governance folks
- 3. they’re still going to be outnumbered by a large number of people who are explicitly anti-EA/rationalist
- 4. they’re going to waste time tweeting / checking notifications
- 5. the reward structure is such that if you have never posted on X before, or don’t have many people who know you, long-form tweets will perform worse than dunks / current-events takes / entertainment
- 6. they’ll reach an asymptote, given that the LessWrong crowd is still much smaller than the overall tech twitter crowd
To be clear, I agree that the current discourse quality is pretty low and I’d love to see it improve; my main claims are that:
- i. the time it would take to actually shift discourse meaningfully is much longer than the number of years we actually have
- ii. current incentives & the current partition of twitter communities make the environment very adversarial
- iii. other communities are better aligned with twitter’s incentives (e.g. e/accs dunking, tpots liking everything), which means that even if LessWrong people tried to shape discourse, the twitter algorithm would not prioritize their (genuine, truth-seeking) tweets
- iv. twitter’s reward system won’t promote rational thinking, and will lead to spending more (unproductive) time on twitter overall.
All of the above makes it unlikely that (on average) the contribution of LW people to AI discourse will be worth all the tradeoffs that come with posting more on twitter.
EDIT: this is all in case we’re talking about main posts; I could see how posting replies debunking tweets, or writing community notes, could work.
(Also the arguments of this comment do not apply to Community Notes.)
> the number of people who could write sensible arguments is small

Disagree. The quality of arguments that need debunking is often way below the average LWer’s intellectual pay grade. And there are actually quite a lot of us.
Ok, I meant something like “the number of people who could reach a large audience (e.g. roon’s level, or even 10x fewer than that) by tweeting only sensible arguments is small”.
But I guess that doesn’t invalidate what you’re suggesting. If I understand correctly, you’d want LWers to just create a twitter account and debunk arguments by posting replies & occasionally writing community notes.
That’s a reasonable strategy, though the medium-effort version would still require something like 100 people sometimes spending 30 minutes writing good replies (let’s say 10 minutes a day on average). I agree that this could make a difference.
I guess the sheer volume of bad takes (and of people who like / retweet them) is such that even in the positive case where you get ~100 people committed to debunking arguments, this would maybe add 10 replies to the most viral tweets (which get ~100 replies, so 10%), and maybe 1-2 replies on the less popular tweets (of which there are many more).
I think it’s worth trying, and maybe there are snowball / long-term effects to take into account. It’s worth highlighting the cost, though: 100 people doing this for 10 minutes a day is ~17 hours of productivity per day, at least, given the extra costs of just opening the app. It’s also worth highlighting that most people who would click on bad takes are already polarized, and I’m not sure they would change their minds in response to good arguments (they’d probably just reply negatively, because the true rejection is more about political orientation, priors about AI risk, or things like that).
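To make the cost/benefit math above explicit, here’s a back-of-the-envelope sketch; all inputs (100 people, 10 minutes a day, ~100 replies on a viral tweet, 10 debunkers showing up per tweet) are the rough assumptions from this thread, not measured data:

```python
# Fermi estimate for the "reply guy" strategy discussed above.
# All inputs are rough assumptions from this thread, not measured data.

num_people = 100                # LWers committed to debunking bad takes
minutes_per_day = 10            # average time spent per person per day
viral_tweet_replies = 100       # typical reply count on a very viral bad take
debunkers_per_viral_tweet = 10  # how many of the 100 show up on a given tweet

# Recurring collective cost, in hours per day.
daily_cost_hours = num_people * minutes_per_day / 60
print(f"collective cost: ~{daily_cost_hours:.1f} hours/day")  # ~16.7

# Share of a viral tweet's reply section that would be sensible arguments.
share = debunkers_per_viral_tweet / viral_tweet_replies
print(f"share of replies on a viral tweet: {share:.0%}")  # 10%
```

So even the optimistic case buys ~10% of the reply section on the most visible tweets, at a recurring cost of roughly two full workdays of collective attention per day.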
But again, worth trying, especially the low-effort versions.