The main claim that needs to be evaluated is "AI is an existential risk," along with the various hypotheses that would imply that it is.
If the kind of AI that poses existential risk is vanishingly unlikely to be invented (which is what I tend to believe, though I'm not super-confident), then SIAI is working to no real purpose and has about the same usefulness as a basic research organization that isn't making much progress. Pretty low priority.
Are you considering other effects SIAI might have, besides those directly related to its primary purpose?
In my opinion, Eliezer’s rationality outreach efforts alone are enough to justify its existence. (And I’m not sure they would be as effective without the motivation of this “secret agenda”.)
Interesting. Why do you think so?