also, sharing the comment with arbitrary people is a fairly obvious feature here
if one goes in this direction, then a natural next step might be the ability to organize private multi-party discussion...
the main use case, in my opinion, would be as follows: there are plenty of sensitive technical topics in the field of AI existential safety; in particular, many important approaches and techniques are dual-use: they can be used to improve safety, and they can be used to boost capabilities; “security via relative obscurity” is one possible approach here, but LessWrong does not currently have any mechanisms to support additional levels of it...
the question is: what would be a downside of having something like that? let’s ponder this...