A new forum devoted to FAI would risk rapidly running out of quality material if it recruits only a few people from LW. To have a chance of sustainability, it needs outsiders from relevant fields, like AGI, non-SIAI machine ethics, and “decision neuroscience”, and these new recruits will be at risk of fleeing the project if it comes packaged with the standard LW eschatology of immortality and a utilitronium cosmos, which will sound simultaneously fanatical and frivolous to someone engaged in hard expert work. I don’t think we’re ready for this; it sounds like at least six months’ work to develop a clear intention for the site, decide whom to invite and how to invite them, and otherwise settle into the necessary sobriety of outlook.
Meanwhile, you could make a post like Luke has done, explaining your objective and the proposed ingredients.