Do you think a LW subreddit devoted to FAI could work? If not, then we probably aren’t ready for the site you suggest, and the default venue for such dialogues should continue to be LW Discussion.
Probably not. There are too many things that can't be said about FAI on an SIAI-affiliated blog, for political reasons. It would be lame.
What if the subreddit were an actual Reddit subreddit?
I think a LW subreddit devoted to FAI could potentially be very frustrating. The majority of FAI-related posts that I’ve seen on LW Discussion are pretty bad and get upvoted anyway (though not much). Do you think Discussion is an adequate forum for now?
I should use this opportunity to quit LW for a while.
A new forum devoted to FAI risks rapidly running out of quality material, if it just recruits a few people from LW. It needs outsiders from relevant fields, like AGI, non-SIAI machine ethics, and “decision neuroscience”, to have a chance of sustainability, and these new recruits will be at risk of fleeing the project if it comes packaged with the standard LW eschatology of immortality and a utilitronium cosmos, which will sound simultaneously fanatical and frivolous to someone engaged in hard expert work. I don’t think we’re ready for this; it sounds like at least six months’ work to develop a clear intention for the site, decide who to invite and how to invite them, and otherwise settle into the necessary sobriety of outlook.
Meanwhile, you could make a post like Luke has done, explaining your objective and the proposed ingredients.