SIAI is an organization built around a particular set of theories about AI—theories not all AI researchers share. If SIAI’s theories are right, they are the most important organization in the world. If they’re wrong, they’re unimportant.
This strikes me as a false dichotomy. It seems unlikely that the theories are all right or all wrong. Also, most important in the world vs. unimportant by what metric? They could be wrong about some crucial things and be unlikely to come around to more accurate views, yet still carry high utilitarian expected value on the possibility that they do.
I agree that taw has been unfairly critical of SIAI and that SIAI people may well be closer to the mark than mainstream AGI theorists (in fact I think this more likely than not).
The main claim that needs to be evaluated is “AI is an existential risk,” and the various hypotheses that would imply that it is.
If the kind of AI that poses existential risk is vanishingly unlikely to be invented (which is what I tend to believe, but I’m not super-confident) then SIAI is working to no real purpose, and has about the same usefulness as a basic research organization that isn’t making much progress. Pretty low priority.
Are you considering other effects SIAI might have, besides those directly related to its primary purpose?
In my opinion, Eliezer’s rationality outreach efforts alone are enough to justify its existence. (And I’m not sure they would be as effective without the motivation of this “secret agenda”.)
Interesting. Why do you think so?