SIAI is an organization built around a particular set of theories about AI—theories not all AI researchers share. If SIAI's theories are right, it is the most important organization in the world. If they're wrong, it's unimportant.
So my particular position is that I'm not giving to SIAI until I'm financially well-off enough that I can ask for a few hours of Eliezer's time and get a better idea of whether the theories are correct.
There are really three separate things SIAI is working on in the AI area: one is decision theory suitable for controlling a self-modifying intelligent agent in a way that preserves the original goals. Another is deciding what those goals are (CEV). The third is actually implementing the agent design. They have published papers on the first two (CEV and decision theory), and you do not need Eliezer’s time to evaluate the results; to me they seem very valuable, even if they are not ultimate solutions to the problem. Their AGI research, if any, remains unpublished (I believe on purpose).
Whether (or more likely, how much) these two successes contribute to reducing existential risk largely depends on the context, which is the possibility of imminent development of AGI. Perhaps Eliezer can be helpful here, though I'd prefer to get this data independently.
ETA: Personally I've given some money to SI, but that's largely based on their previous successes rather than a clear agenda for future direction. I'm ok with this, but it's possibly sub-optimal for getting others to contribute (or getting me to contribute more).
I should probably reread the papers. My brain tends to go “GAAAH” at the sight of game theory. I’m probably a bit biased because of that.