Just weighing in here:
SIAI is an organization built around a particular set of theories about AI—theories not all AI researchers share. If SIAI’s theories are right, they are the most important organization in the world. If they’re wrong, they’re unimportant.
The field of AI has been littered with (metaphorical) corpses since the 1960s. If an AI researcher tells you any theory, you have a very, very strong prior that it is false, especially if it concerns “general” intelligence or “human-level” intelligence. So Eliezer is probably wrong, just like everyone else. That’s not a particular criticism of him; being wrong still puts him in august company.
So my particular position is that I’m not giving to SIAI until I’m worth enough financially that I can ask for a few hours of Eliezer’s time, and get a better idea of whether the theories are correct.
What I don’t like is the suggestion I get from your posts that somehow SIAI is the work of self-deluded charlatans. I know what charlatanism sounds like—I’ve had dear friends get halo effects around their pet ideas. I know what it sounds like when someone is just trying to get me to support the team and is playing fast and loose with the facts. And at least some of the SIAI people don’t do that at ALL. You have to admire the honesty, even if you’re skeptical (as I am) that research can succeed in such isolation from mainstream science. Eliezer is a good person. This is an honest and thoughtful attempt to do what he says he wants to do—I am very, very confident of that.
Offer these people the respect (or charity, if you will) of judging their ideas on the merits—or, if you don’t have time to look into the ideas, mark that as ignorance on your part. You seem to be saying “They must be wrong because they’re weird.” The thing is, they’re working in a field where even the experts are a little weird, and where even the mainstream academics have been wrong about a lot. You’ve got to revise your “Don’t believe weirdos” prediction down a little bit. The more I learn about the world, the more I realize that the non-weirdos don’t have it all sewn up.
I don’t think this plan matches up with your reasons for holding off. Even if you were an expert in the fields Eliezer is working in, it sounds like that wouldn’t give you the ability to give any of his ideas a positive seal of approval, since many people have worked on ideas for a long time without seeing what was wrong with them. A few hours to hash out disagreements also seems like a very low estimate: how long do you think Eliezer and Robin Hanson have spent debating their theories without coming any closer to resolution?
The scenario you paint, in which you get rich enough for Eliezer to spend a few hours of his time reassuring you, sounds designed less to determine the correctness of the theories than to give you as much emotional satisfaction as possible.
I should make clear that I do not mean to condemn, but rather to provoke introspection. It is not clear to me that there is any reason to support SIAI or other charities beyond emotional satisfaction, so it may be wise to pursue opportunities like this one, just without being explicit that emotional satisfaction is the compensation you expect from charities.
Clearly a few hours wouldn’t be enough for me to reach a level of knowledge comparable to an expert’s, but it could definitely move my probability estimate a lot.
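Just to put rough numbers on that (all of them made up), this is the kind of shift I have in mind: under Bayes’ rule, a low prior plus a moderately informative conversation can still land you somewhere noticeably different.

```python
# Rough illustration with made-up numbers: how far a few hours of
# conversation could plausibly move a probability estimate under Bayes' rule.
def update(prior, likelihood_ratio):
    """Posterior after evidence with the given likelihood ratio
    P(evidence | theories right) / P(evidence | theories wrong)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.05           # hypothetical starting credence that the theories are right
for lr in (2, 5, 10):  # hypothetical strength of what I learn in those hours
    print(f"likelihood ratio {lr:>2}: posterior = {update(prior, lr):.2f}")
# likelihood ratio  2: posterior = 0.10
# likelihood ratio  5: posterior = 0.21
# likelihood ratio 10: posterior = 0.34
```

None of those numbers are claims about SIAI; the point is only that the update from a short conversation need not be small.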
There are really three separate things SIAI is working on in the AI area: a decision theory suitable for controlling a self-modifying intelligent agent in a way that preserves its original goals; deciding what those goals are (CEV); and actually implementing the agent design. They have published papers on the first two (CEV and decision theory), and you do not need Eliezer’s time to evaluate the results; to me they seem very valuable, even if they are not ultimate solutions to the problem. Their AGI research, if any, remains unpublished (I believe on purpose).
Whether (or more likely, how much) these two successes contribute to reducing existential risk largely depends on the context, namely the possibility of imminent development of AGI. Perhaps Eliezer can be helpful here, though I’d prefer to get this data independently.
ETA: Personally, I’ve given some money to SI, but that was largely based on its previous successes rather than on a clear agenda for future work. I’m OK with this, but it’s possibly sub-optimal for getting others to contribute (or getting me to contribute more).
I should probably reread the papers. My brain tends to go “GAAAH” at the sight of game theory. I’m probably a bit biased because of that.
The claim that SIAI is either the most important organization in the world or unimportant strikes me as a false dichotomy. It seems unlikely that the theories are all right or all wrong. Also, most important in the world vs. unimportant by what metric? They could be wrong about some crucial things and unlikely to come around to more accurate views, yet still carry high utilitarian expected value on the possibility that they do.
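To make the expected-value point concrete with invented figures (none of them drawn from anything real):

```python
# Toy expected-value comparison; every number here is invented purely to
# illustrate the shape of the argument, not to estimate anything about SIAI.
p_right = 0.01              # hypothetical chance the crucial claims turn out right
value_if_right = 1e9        # hypothetical payoff (arbitrary units) in that case
value_if_wrong = 0.0        # assume roughly no payoff otherwise

ev_siai = p_right * value_if_right + (1 - p_right) * value_if_wrong
ev_typical = 1e5            # hypothetical payoff of a more conventional donation

print(ev_siai, ev_typical)  # 10000000.0 100000.0
```

Even at a one-percent chance of being right, the hypothetical expected value dominates, which is the sense in which “probably wrong” need not mean “unimportant.”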
I agree that taw has been unfairly critical of SIAI and that SIAI people may well be closer to the mark than mainstream AGI theorists (in fact I think this more likely than not).
The main claim that needs to be evaluated is “AI is an existential risk,” along with the various hypotheses that would imply that it is.
If the kind of AI that poses existential risk is vanishingly unlikely to be invented (which is what I tend to believe, but I’m not super-confident) then SIAI is working to no real purpose, and has about the same usefulness as a basic research organization that isn’t making much progress. Pretty low priority.
Are you considering other effects SIAI might have, besides those directly related to its primary purpose?
In my opinion, Eliezer’s rationality outreach efforts alone are enough to justify its existence. (And I’m not sure they would be as effective without the motivation of this “secret agenda”.)
Interesting. Why do you think so?