Here is my take: since there’s so much AI content, it’s not really feasible to read all of it, so in practice I read almost none of it (and consequently visit LW less frequently).
The main issue I run into is that for most posts, on a brief skim it seems like basically a thing I have thought about before. Unlike academic papers, most LW posts do not cite previous related work nor explain how what they are talking about relates to this past work. As a result, if I start to skim a post and I think it’s talking about something I’ve seen before, I have no easy way of telling if they’re (1) aware of this fact and have something new to say, (2) aware of this fact but trying to provide a better exposition, or (3) unaware of this fact and reinventing the wheel. Since I can’t tell, I normally just bounce off.
I think a solution could be to have a stronger norm that posts about AI should say, and cite, what they are building on and how it relates / what is new. This would decrease the amount of content while improving its quality, and also make it easier to choose what to read. I view this as a win-win-win.
Tangentially, “visiting LW less frequently” is not necessarily a bad thing. We are not in the business of selling ads; we do not need to maximize the time users spend here. Perhaps it would be better if people spent less time online (including on LW) and more time doing whatever meaningful things they might do otherwise.
But I agree that even assuming this, “the front page is full of things I do not care about” is a bad way to achieve it.
Tools for citing the existing corpus of LessWrong posts and off-site scientific papers would be amazing; e.g., rolling search for related academic papers as you type your comment via the Semantic Scholar API, combined with search over LessWrong for the proper nouns in your comment, or something like that. A lot of what I want to say is, and is meant to be, mostly references to citations, but formatting citations for use on LessWrong is a chore, and I suspect most folks here don’t skim as many papers as I do. (That said, folks like yourself could probably give people like me lessons on how to read papers.)
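To make the paper-lookup half concrete, here is a rough sketch. It assumes the Semantic Scholar Graph API's `/paper/search` endpoint; exact parameters and rate limits are worth checking against the current docs, and the step that extracts candidate phrases from the draft is left out:

```python
"""Rough sketch: look up possibly-related papers for a phrase from a draft comment.

Assumes the Semantic Scholar Graph API's /paper/search endpoint
(https://api.semanticscholar.org/graph/v1); check the current docs
for exact parameters and rate limits.
"""
import requests

S2_SEARCH = "https://api.semanticscholar.org/graph/v1/paper/search"


def related_papers(query: str, limit: int = 5) -> list[dict]:
    """Return a few candidate papers matching a phrase from the draft."""
    resp = requests.get(
        S2_SEARCH,
        params={"query": query, "limit": limit, "fields": "title,url,year"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])


# e.g. run this on noun phrases extracted from the comment as the user types
for paper in related_papers("reward hacking in reinforcement learning"):
    print(f"{paper.get('year')}  {paper['title']}  {paper.get('url')}")
```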
Also very cool would be tools for linting emotional tone. I remember running across a user study that used a large language model to nudge reviewers toward less toxic comments; I believe it was an intervention study assessing how usable such a system was. Looking for that now...
Maybe GPT-3 could be used to find LW content related to the new post, using something like this: https://gpt-index.readthedocs.io
Unfortunately, I haven’t gotten around to doing anything with it yet, but it seems useful: https://twitter.com/s_jobs6/status/1619063620104761344
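For illustration, here is a minimal sketch of the retrieval idea behind tools like gpt-index: embed the existing posts, embed the draft, and surface the nearest neighbours. The `embed_text` helper below is a toy stand-in (hashed bag-of-words) for whatever real embedding model or API you would actually plug in:

```python
"""Minimal sketch of embedding-based retrieval of related LW posts.

`embed_text` is a toy placeholder; in practice you would swap in a real
embedding model or API and cache the corpus embeddings.
"""
import numpy as np


def embed_text(text: str, dim: int = 256) -> np.ndarray:
    """Toy embedding: hashed bag-of-words, normalized to unit length."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v


def top_related(draft: str, corpus: dict[str, str], k: int = 5) -> list[tuple[str, float]]:
    """Rank existing posts by cosine similarity to the draft."""
    q = embed_text(draft)
    scored = [(title, float(np.dot(q, embed_text(body)))) for title, body in corpus.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]


# posts = {"Some LW post title": "post text...", ...}
# for title, score in top_related(draft_text, posts):
#     print(f"{score:.2f}  {title}")
```

In practice the toy embedding would be replaced by a real one, and the corpus embeddings precomputed and stored rather than recomputed on every query.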
Over the years I’ve thought about a “LessWrong/Alignment” journal article format, analogous to the way regular papers have Abstract-Intro-Methods-Results-Discussion. Something like that, but tailored to our needs, maybe also bringing in OpenPhil-style reasoning transparency (but doing a better job of communicating models).
Such a format could possibly mandate what you’re wanting here.
I think it’s tricky: you have to believe that any such format actually makes posts better rather than constraining them, and that it’s worth the extra effort it asks of writers.
It is something I’d like to experiment with though.