Speaking to the value of secrecy and how it disincentivizes the make-it-prestigious, hire-lots-of-talent path: perhaps the X-risk family of EA should fund mechanism design experts and researchers from DARPA or the intelligence community to produce proposals for building highly secure research orgs, then direct funding toward talent under whichever org demonstrates good secrecy practices.
It isn’t as though “we need you to help stop the AI apocalypse” is that different a pitch from the “we need you to help stop the bioweapon apocalypse” or “we need you to help stop the nuclear apocalypse” pitches the government has been using since WW2. I would rate the latter two as mostly successful on the secrecy front, evaluated independently of their effectiveness.