Reposting old posts. Reposting classic LW posts among the new posts (either in the discussion section or on the main page) can make them more accessible and can serve as a focal point for discussion. It can help introduce new readers to important background material, fill in some gaps for people who have been around LW for a while but haven’t read everything, and refresh the minds of oldtimers who read the post long ago. The sequences are an important part of LW, and along with frequent suggestions to read the sequences and other efforts, reposting can help get people to read them.
One option: repost individual posts. It could be one of your favorites, or any old post that you consider relevant or want to discuss. Just make a post in the discussion section with a link to the old post plus an explanation of why you reposted it. The explanation could just be a sentence saying that it’s a great post about X, or you could give a longer commentary on the post. Further discussion can take place in the comments to your post.
Option two: Cycle through the sequences. Start with the first post in the first sequence and repost one post per day in the discussion section. Each repost would contain a link to the original post, a link to the previous & subsequent posts in the sequence, and a one-paragraph summary of the post (which could be copied directly from the wiki). Any additional comments or discussion would take place in the comments to the new post.
We could do either one of these options or both of them, or try another variation on the idea of reposting old posts. If we pin down some details, like a standardized format for the title & tags to indicate that it’s a repost, we could turn this into a convention that’s helpful for people who want to reread classic posts and easy to ignore for people who don’t.
Decent idea that’s been brought up various times in different contexts. I think doing this right would mean not actually copying the posts, but rather re-exposing them (an RSS feed, or being on top of some page). The old comments are still useful, and having multiple copies of the same data floating around is always a bad idea.
By “repost” I didn’t mean that we would copy the old post. I was thinking that someone would make a new post which contains a link to the old post and a paragraph or so about the old post (either a summary or an explanation of why it’s worth reading).
There are other ways to give old posts more exposure, but this suggestion is a simple, low-tech approach which we could start doing right now if we wanted to. We just need to decide if we want to do it, and agree on conventions for those posts (title, tags, how often, whether comments should go on the original post or the new post, etc.).
Hah, this actually reminds me of the tradition in Eastern Orthodox churches, where they will read a passage out of the New Testament every day and then discuss it (usually a one-way discussion, i.e., preaching), thus reading the whole book over the course of the year.
We could split the sequences in a similar fashion, cycling through them over the span of about a year.
I’ve been thinking for a long time that it would be cool to make an RSS drip-feed tool: a script that, given a start date, returns a retro-feed of the original LW posting sequence, so that somebody could subscribe and get walked through all the posts in their original order, with their original spacing.
Alternately, an email autoresponder that does the summary+link thing. Some of the fancier systems even let you notice that someone has clicked on something, and then add additional materials to the sequence, so that e.g. clicking on a QM post causes you to be offered more QM posts, and so on.
The big advantage to the RSS option, though, is that it’s relatively low-tech and low processing overhead—you could serve it off a single, simple database query.
We could probably repurpose some of the Archive Binge software, although it might need work to reproduce the ‘original spacing’. (Not that I’m convinced that’s very useful. 1 every X days sounds better to me.)
it might need work to reproduce the ‘original spacing’
Actually, it’s easier to keep the original spacing, because then all you need is a database of the posts and their original dates, and some very simple math to do the query. To do “1 every X days” means you have to fake the dates, use serial numbers, or some other such rubbish in order to find which items to put in the feed.
Easier? Hm?

I have a list of postings sans dates. Every X days cron runs and the head of the list is popped off into the RSS feed.

I have a list of postings with dates. Whenever somebody tries to read an RSS feed, I return the entries within the appropriate time window.
IOW, my approach doesn’t store any server-side state. All the state is in the feed URL (specifying the start date). The query is something like:
SELECT (original_post_date - first_post_date + feed_url_date), title, etc.
FROM posts
WHERE original_post_date < (now() - feed_url_date + first_post_date)
ORDER BY original_post_date DESC
LIMIT size_of_feed -- a constant, like 20
Et voila. No cron. No “list”. No “feed” to have things “popped into”. If ten thousand people subscribe, there is no additional data added to a database or written to disk anywhere. And since the database is read-only, you can replicate and load-balance the service to your heart’s content.
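The date-shift trick in that query can be sketched in plain Python. This is a minimal model assuming a hypothetical in-memory archive of (date, title) pairs in place of a real posts table; the dates and titles are made up:

```python
from datetime import date

# Hypothetical archive: (original_post_date, title), oldest first.
POSTS = [
    (date(2007, 6, 1), "Post A"),
    (date(2007, 6, 3), "Post B"),
    (date(2007, 6, 8), "Post C"),
]
FIRST_POST_DATE = POSTS[0][0]
FEED_SIZE = 20  # a constant, like the LIMIT in the query

def feed_entries(feed_url_date, today):
    """Return the newest FEED_SIZE entries visible to a subscriber
    whose feed started on feed_url_date: every post's date is shifted
    so the archive replays from that start date, at original spacing."""
    shift = feed_url_date - FIRST_POST_DATE
    visible = [(orig + shift, title)
               for orig, title in POSTS
               if orig + shift <= today]
    visible.sort(reverse=True)  # newest first, like ORDER BY ... DESC
    return visible[:FEED_SIZE]
```

A subscriber who started on 2011-01-01 and checks two days later sees the first two posts, shifted: each subscriber’s entire state is the start date baked into their feed URL, so the same pure function serves everyone.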
In addition, my approach can be trivially extended to use an etag or a last-modified date that contains the date of the next post, and then avoid doing the query at all if that date hasn’t been reached yet. (Most RSS clients support sending back an ETag or If-modified-since header containing the information from the last query, so that they can skip reparsing—and this would allow the system to simply say, “nah, nothing’s changed” and not re-run the query.)
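That optimization can be sketched the same way: compute the shifted date of the subscriber’s next post and hand it back as the Last-Modified (or ETag) value. Same hypothetical in-memory archive as before; a real server would additionally format the date as an HTTP header:

```python
from datetime import date

# Hypothetical archive: (original_post_date, title), oldest first.
POSTS = [
    (date(2007, 6, 1), "Post A"),
    (date(2007, 6, 3), "Post B"),
    (date(2007, 6, 8), "Post C"),
]
FIRST_POST_DATE = POSTS[0][0]

def next_post_date(feed_url_date, today):
    """Shifted date of the next post due in this subscriber's feed,
    or None if the archive is exhausted.  If a later request's
    If-Modified-Since shows this date is still in the future, the
    server can answer 304 Not Modified and skip the query entirely."""
    shift = feed_url_date - FIRST_POST_DATE
    upcoming = sorted(orig + shift for orig, _ in POSTS if orig + shift > today)
    return upcoming[0] if upcoming else None
```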
And it’s still scalable via replication—you can have as many clones running as you want, and they’ll all answer the same thing about the given feed URL (within the accuracy of their clock synchronization, of course).
Et voila.
Actually, this approach is so simple that you don’t even need a real SQL database—Google App Engine’s simple database API would suffice. Heck, the “database” itself is probably small enough to be embedded entirely within the source code, if you did a titles-only feed. ;-)
Thanks for the link. I’ve been looking for something that could do that sort of thing. Now to see if there is something that works for things other than comics...
Agreed. How often to repost will probably be a matter of experimentation, though I think one or two per week is a reasonable guess.
I really like this idea. Partially because it will get me to re-read the sequences like I’ve been planning to for the last year or so.