The Story Node design that I suggested permits spinning up such nodes independently, with gated input from users. The result could be effectively what Lightcone attempted with the LessWrong/Alignment Forum separation, but with less gatekeeping. The Alignment Forum gates not only votes but also posts and comments; the latter seems particularly unreasonable and elitist to me, as if comments on LessWrong are not worthy of attention. I suspect this leads to a situation where, even if somebody still uses the Alignment Forum frontpage as their entry point to AI safety discussion (I myself long ago abandoned it in favour of LW), reading the comments on the Alignment Forum strictly doesn't make sense for anyone. The design that I propose, on the other hand, separates two concerns: who can post the content, and how the content's usefulness/signal is calculated.
And this system is more general: beyond prioritising "high-quality AI safety content", more than one person here has written that there is too much AI safety and x-risk content for them. Filtering by specific tags is brittle because people apply tags to their writing rather inconsistently. So another Story Node could be founded by people who want to read and upvote primarily "good old rationality content" rather than AI safety content.
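A minimal sketch of the separation of concerns described above: several Story Nodes rank the same shared pool of posts, each with its own gated voter set, so posting rights and per-node signal stay independent. All names here (`Post`, `StoryNode`, `frontpage`) are hypothetical illustrations, not an actual proposed API.

```python
from dataclasses import dataclass, field


@dataclass
class Post:
    post_id: str
    author: str
    tags: set = field(default_factory=set)


@dataclass
class StoryNode:
    """A node with its own gated voter set, ranking a shared content pool."""
    name: str
    voters: set                                   # gated input: who may vote here
    votes: dict = field(default_factory=dict)     # post_id -> this node's score

    def vote(self, user: str, post: Post, weight: int = 1) -> bool:
        # Only gated voters contribute to this node's signal;
        # anyone may still have authored the post itself.
        if user not in self.voters:
            return False
        self.votes[post.post_id] = self.votes.get(post.post_id, 0) + weight
        return True

    def frontpage(self, pool: list) -> list:
        # Rank the shared pool by this node's votes alone.
        return sorted(pool, key=lambda p: self.votes.get(p.post_id, 0), reverse=True)


# Two nodes over one shared pool: same posts, different frontpages.
pool = [Post("p1", "alice", {"ai-safety"}), Post("p2", "bob", {"rationality"})]
safety = StoryNode("ai-safety", voters={"carol"})
rationality = StoryNode("rationality", voters={"dave"})
safety.vote("carol", pool[0])
rationality.vote("dave", pool[1])
```

Under this sketch, an "old-school rationality" node simply maintains a different voter set over the same posts, with no tag filtering and no gate on who may write.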