Agreed. I should have stated this as an implicit premise in my reasoning; if FAI research shouldn’t be pursued, then the SIAI should probably be dissolved and its resources directed to more useful approaches.
Probably not a good assumption; they’ve changed approaches before (in their earliest days, the idea of FAI hadn’t been invented yet, and they were about getting to the Singularity, any Singularity, as quickly as possible). If, hypothetically, there arose some very convincing evidence that FAI is a suboptimal approach to existential risk reduction, then they could change course again while retaining their network of donors, smart people, and so forth. That probably won’t need to happen, but still, shutting down SIAI wouldn’t be the only option (let alone the best option) if it turned out that FAI was a bad idea.