I’d like to be able to make groups, such as other users who seem interested in AI in China and have had good approaches to the topic in the past. It would help if there were a reliable way to automatically add a wide variety of specific people to such a list, e.g. people who write posts and comments with tags (or even keywords like “china” or “intelligence agency”) that indicate they frequently write about a specific domain like AI policy.
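To make the idea concrete, here is a minimal sketch of that auto-grouping rule. Everything here is a hypothetical illustration: the post fields, the keyword/tag sets, and the threshold are all assumptions, not LessWrong's actual data model or API.

```python
# Hypothetical sketch: auto-populate a user group from tag/keyword activity.
# The data shapes below are invented for illustration.

KEYWORDS = {"china", "intelligence agency"}
TAGS = {"ai-policy"}
MIN_MATCHING_POSTS = 3  # require repeated activity, not a one-off mention

def matches(post):
    """A post counts if it carries a target tag or contains a target keyword."""
    text = (post["title"] + " " + post["body"]).lower()
    return bool(TAGS & set(post["tags"])) or any(k in text for k in KEYWORDS)

def auto_group(users):
    """Return the names of users whose posts repeatedly match the topic."""
    return [
        u["name"]
        for u in users
        if sum(matches(p) for p in u["posts"]) >= MIN_MATCHING_POSTS
    ]
```

The threshold is the interesting design choice: requiring several matching posts, rather than one, is what separates "frequently writes about this domain" from someone who mentioned China once.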
I’d like to be able to make comments that new users, e.g. ones with the sprout icon, cannot see until they no longer have the sprout icon. If AI safety blows up then the number of people on LessWrong will blow up too, which means an increase in the absolute (not relative) number of people on LW whom I don’t trust. Karma-based restrictions would be neat, but I don’t know how to make that work.
I’d like to be able to make certain kinds of comments that exclude the users on a list, adding people to that list whenever they engage in bad-actor behavior. There should be a duration option, e.g. 6 months, and it should be very easy to put people on the bad-actor list, even easier to take them off, and easy to reduce the number of remaining months if I see indicators that they aren’t bad actors anymore or weren’t as bad as I thought.
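The expiring-list mechanic above can be sketched in a few lines. This is a hypothetical illustration only; the class name, the 30-day month approximation, and the in-memory storage are my assumptions, not an existing LessWrong feature.

```python
import time

# Hypothetical sketch of a per-user exclusion list with expiring entries.
class ExclusionList:
    def __init__(self):
        self._until = {}  # user -> unix timestamp when the exclusion lapses

    def add(self, user, months=6):
        """Exclude a user for `months` (approximated as 30-day months)."""
        self._until[user] = time.time() + months * 30 * 86400

    def remove(self, user):
        """Taking someone off should be even easier than adding them."""
        self._until.pop(user, None)

    def reduce(self, user, months):
        """Shorten a remaining exclusion, e.g. after signs of good faith."""
        if user in self._until:
            self._until[user] -= months * 30 * 86400

    def is_excluded(self, user):
        """Entries lapse automatically once their timestamp passes."""
        return self._until.get(user, 0) > time.time()
```

Storing an expiry timestamp instead of a flag means nobody has to remember to unban anyone: entries lapse on their own, and `reduce` just moves the timestamp earlier.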
This comment reminded me: I get a lot of value from Twitter DMs and groupchats. More value than I get from the actual feed, in fact, which—according to my revealed preferences—is worth multiple hours per day. Groupchats on LessWrong have promise.
Note LessWrong has group chat – it’s in the conversation options button after you start a chat with one person.
Why would LW need a group-chat (or more developed DM) function? If you want non-public conversations with select LW members, can’t you do that today on Twitter, Discord, Slack, or e-mail?
Yes, of course you could. The concern is user friction.
To paraphrase Douglas Adams, I object partly because it is a debasement of open discussion, but mostly because I don’t get invited to those sorts of parties.