This is super cool! I cannot wait to give this a go on my next post. Having access to a high-quality feedback loop is such a powerful way to improve our writing and hence our thinking.
Assuming this results in higher-quality posts on LessWrong, I wonder whether more posts will be promoted to Frontpage as a result, or whether the Frontpage promotion criteria will become more stringent.
[The rest of this comment is off-topic, feel free to ignore it.]
I also wonder how this feature will scale as LessWrong continues to grow.
More broadly, I’ve been thinking about the scalability of the LessWrong model ever since I read the book The Constitution of Knowledge by Jonathan Rauch. I am very impressed with LessWrong as a platform, not only in terms of the integrity of discussions but also in terms of the software quality and moderation. I emailed Jonathan Rauch and brought LessWrong to his attention. He dedicates a significant portion of the book to product designs that incentivise civil and explanatory discussions and the graceful collision of ideas without siloing users into bubbles.
Imagine if LessWrong had 2.89 billion monthly active users (Facebook's MAU count): could it handle that scale with its current product design and governance model? If not, what changes and innovations would be needed? My understanding is that Facebook hires a large number of human moderators to deal with misinformation and disinformation. I wonder when AI will be intelligent enough to help effectively with tasks like moderation and proofreading.