I don’t know if this is a taboo subject or what, but I’m curious. What makes you include SIAI in this category? (If you’d rather not discuss it on LessWrong, you can e-mail me at mainline dot express at gmail.)
Donating to SIAI is a pure display of tribal affiliation, and such displays are a zero-sum game. They have nothing to show for it, and there’s not even any real reason to think this reduces rather than increases existential risk.
If you really care about reducing existential risk, seed vaults and asteroid tracking are two obvious programs that both definitely work at decreasing the risk, and don’t cost much.
Just weighing in here:
SIAI is an organization built around a particular set of theories about AI—theories not all AI researchers share. If SIAI’s theories are right, they are the most important organization in the world. If they’re wrong, they’re unimportant.
The field of AI has been littered with (metaphorical) corpses since the 1960s. If an AI researcher tells you any theory, you have a very, very strong prior for believing it is false—especially if it concerns “general” intelligence or “human-level” intelligence. So, Eliezer is probably wrong just like everyone else. That’s not a particular criticism of him; it still puts him in august company.
So my particular position is that I’m not giving to SIAI until I’m worth enough financially that I can ask a few hours of Eliezer’s time, and get a better idea of whether the theories are correct.
What I don’t like is the suggestion I get from your posts that somehow SIAI is the work of self-deluded charlatans. I know what charlatanism sounds like—I’ve had dear friends get halo effects around their pet ideas. I know what it sounds like when someone is just trying to get me to support the team and is playing fast and loose with the facts. And at least some of the SIAI people don’t do that at ALL. You have to admire the honesty, even if you’re skeptical (as I am) that research can succeed in such isolation from mainstream science. Eliezer is a good person. This is an honest and thoughtful attempt to do what he says he wants to do—I am very, very confident of that.
Offer these people the respect (or charity, if you will) of judging their ideas on the merits—or, if you don’t have time to look into the ideas, mark that as ignorance on your part. You seem to be saying “They must be wrong because they’re weird.” The thing is, they’re working in a field where even the experts are a little weird, and where even the mainstream academics have been wrong about a lot. You’ve got to revise your “Don’t believe weirdos” prediction down a little bit. The more I learn about the world, the more I realize that the non-weirdos don’t have it all sewn up.
I don’t think this matches up with your rejection. Even if you were an expert in the fields Eliezer is working in, it sounds like that wouldn’t give you the ability to give any of his ideas a positive seal of approval, since many people have worked on ideas for a long time without seeing what was wrong with them. It also seems like a few hours to hash out disagreements is a very low estimate. How long do you think Eliezer and Robin Hanson have spent debating their theories while coming no closer to resolution?
The scenario you paint (that you get rich enough for Eliezer to wager a few hours of his time on reassuring you) does not sound like one designed to determine the correctness of the theories rather than to give you as much emotional satisfaction as possible.
I should make clear that I do not mean to condemn, but rather to provoke introspection; it is not clear to me that there is a reason to support SIAI or other charities beyond emotional satisfaction, so it may be wise to pursue opportunities like this without being explicit that emotional satisfaction is the compensation you expect from charities.
Clearly a few hours wouldn’t be enough for me to get a level of knowledge comparable to experts. It could definitely move my probability estimate a lot.
There are really three separate things SIAI is working on in the AI area: one is decision theory suitable for controlling a self-modifying intelligent agent in a way that preserves the original goals. Another is deciding what those goals are (CEV). The third is actually implementing the agent design. They have published papers on the first two (CEV and decision theory), and you do not need Eliezer’s time to evaluate the results; to me they seem very valuable, even if they are not ultimate solutions to the problem. Their AGI research, if any, remains unpublished (I believe on purpose).
Whether (or more likely, how much) these two successes contribute to reducing existential risk largely depends on the context, namely the possibility of imminent development of AGI. Perhaps Eliezer can be helpful here, though I’d prefer to get this data independently.
ETA: Personally I’ve given some money to SI, but that’s largely based on previous successes and not on a clear agenda for future work. I’m OK with this, but it’s possibly sub-optimal for getting others to contribute (or getting me to contribute more).
I should probably reread the papers. My brain tends to go “GAAAH” at the sight of game theory. I’m probably a bit biased because of that.
This strikes me as a false dichotomy. It seems unlikely that the theories are all right or all wrong. Also, most important in the world vs. unimportant by what metric? They could be wrong about some crucial things and unlikely to come around to more accurate views, yet still carry high utilitarian expected value on the possibility that they do.
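(To make the expected-value point concrete, here is a minimal sketch with made-up numbers; the probability and payoffs below are purely illustrative, not anyone’s actual estimates. Write p for the probability that the theories are broadly right, and V for the value realized in each case:)

\[
\mathbb{E}[V] \;=\; p \cdot V_{\text{right}} \;+\; (1-p) \cdot V_{\text{wrong}}
\]

With, say, \(p = 0.01\), \(V_{\text{right}} = 10^{6}\), and \(V_{\text{wrong}} = -10^{2}\) (arbitrary units), the expectation is \(0.01 \cdot 10^{6} + 0.99 \cdot (-10^{2}) = 9{,}901\), so a small chance of being right can dominate the calculation even when being wrong is the likely case.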
I agree that taw has been unfairly critical of SIAI and that SIAI people may well be closer to the mark than mainstream AGI theorists (in fact I think this more likely than not).
The main claim that needs to be evaluated is “AI is an existential risk,” and the various hypotheses that would imply that it is.
If the kind of AI that poses existential risk is vanishingly unlikely to be invented (which is what I tend to believe, but I’m not super-confident) then SIAI is working to no real purpose, and has about the same usefulness as a basic research organization that isn’t making much progress. Pretty low priority.
Are you considering other effects SIAI might have, besides those directly related to its primary purpose?
In my opinion, Eliezer’s rationality outreach efforts alone are enough to justify its existence. (And I’m not sure they would be as effective without the motivation of this “secret agenda”.)
Interesting. Why do you think so?
Donating to SIAI is a pure display of tribal affiliation

That just isn’t true. It is partially a display of tribal affiliation.

They have nothing to show for it, and there’s not even any real reason to think this reduces rather than increases existential risk.

Even if the SIAI outright increased existential risk, that would not mean donations were purely displays of affiliation. It would mean that all those who donated partially for practical, instrumental reasons were mistaken and making a poor choice. It would not make their act any more purely an affiliation symbol.
If I were to donate (more) to the SIAI it would be a mix of:
- Tribal affiliation.
- Reciprocation. (They gave me a free bootcamp and airplane ticket.)
- Actually not having a better idea of a way to not die.
And the evidence that donating to SIAI does anything other than signal affiliation is...?
EDIT: Downvoting this post sort of confirms my point that it’s all about signaling tribal affiliations.
If people downvoting you is evidence that you are right then would people upvoting you have been evidence that you were wrong? Or does this kind of ‘confirmation’ not get conserved the way that evidence does?
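(To spell out the “conservation” being gestured at: it is just the law of total probability, which forces the prior to equal the expectation of the posterior. The labels below are purely illustrative; take H to be “donations are all about tribal affiliation” and E to be “the comment gets downvoted”.)

\[
P(H) \;=\; P(H \mid E)\,P(E) \;+\; P(H \mid \neg E)\,P(\neg E)
\]

So if observing E would raise your credence, i.e. \(P(H \mid E) > P(H)\), then observing \(\neg E\) must lower it, \(P(H \mid \neg E) < P(H)\) (assuming \(0 < P(E) < 1\)).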
… not required to refute your claim. It’s a goalpost shift. In fact I explicitly allowed for the SIAI being utterly useless or worse than useless in the comment to which you replied. The claim I rejected is this:

Donating to SIAI is a pure display of tribal affiliation
For that to be true it would require that there is nobody who believes that the SIAI does something useful and whose donating behaviour is best modelled as at least somewhat influenced by the desire to achieve the overt goal.
You also require that there are no other causal influences behind the decision, including forms of signalling other than tribal affiliation. I have already mentioned “reciprocation” as a motivating influence other than tribal affiliation. Even if I decided that the SIAI were completely unworthy of my affiliation, I would find it difficult to suppress the instinct to pay back at least some of what they gave me.
The SIAI has received anonymous donations. (The relevance should be obvious.)
Beliefs based on little evidence that people outside the tribe find extremely weird are one of the main forms of signaling tribal affiliation. Taking the Jesus story seriously is how people signal belonging to one of the Christian tribes, and taking the unfriendly-AI story seriously is how people signal belonging to the lesswrong tribe.
No goalposts are being shifted here. Donating to SIAI because one believes lesswrong tribal stories is signaling that you have these tribal-marker beliefs, and still counts as pure 100% tribal-affiliation signaling.
My reference point here would be a fund to build the world’s largest Jesus statue. There seems to be an ongoing largest-Jesus contest: the record was broken twice in just a year, in Poland and then in Peru, and now some Croatian group is trying to outdo them both. People who donate to these efforts might honestly believe this is a good idea. The details of why they believe so are highly complex, but this is a tribal-marker belief and nothing more.
Virtually nobody who’s not a local Catholic considers it a good idea, just like virtually nobody who doesn’t share the “lesswrongian meme complex” considers what SIAI is doing a particularly good idea. I’m sure these funds got plenty of anonymous donations from local Catholics, and maybe some small amount of money from off-tribal people (e.g. “screw religion, but a huge Jesus will be great for tourism here” / “friendly AI is almost certainly bullshit, but weirdos are worth funding as a Pascal’s wager”), but this doesn’t really change anything.
tl;dr: Actions signaling beliefs that correlate with tribal affiliation are actions signaling tribal affiliation, regardless of how conscious this is.
tl;dr: Actions signaling beliefs that correlate with tribal affiliation are actions [solely for] signaling tribal affiliation, regardless of how conscious this is. (Edit based on context.)

This statement is either false or useless.
There are other reasons why someone could downvote your post. Your immediately assuming that it’s about tribal affiliations sort of demonstrates the problem with your claim that it’s all about tribal affiliations.
They’ve published papers. Presumably if we didn’t donate anything, they couldn’t publish papers. They also hand out paychecks to Eliezer. Eliezer is a tribal leader, so we want him to succeed! Between those two, we have proof that they’re doing more than just signalling affiliation.
The far better question is whether they’re doing something useful with that money, and whether it would be better spent elsewhere. That, I do not feel qualified to answer. I think even GiveWell gave up on that one.
Really? I thought we wanted the tribal leader to fail in a way that allows us, or someone we have more influence over, to take his place.
Or we want the tribal leader to be conveniently martyred at their moment of greatest impact. You know, for the good of the cause.
I think that depends on how we perceive the size of the tribe, our position within it, and the security of its status in the outside world...
Sounds interesting. Do you have links for charities of this sort that you recommend?
I’m a big fan of the very loosely related http://longnow.org/, although their major direct project is building a very nice clock.
They definitely try to promote the kind of thinking that will result in things like seed vaults, though.
(I’m a member)
My personal estimate is that better environmental and energy policies would reduce existential risk, but I haven’t seen any appealing organisations in this area.
So am I :) Just got my steel card last week, actually.
I had a wonderful moment several months back when I was wandering about in the science museum in London… and stumbled across their prototype clock… SO cool!
What’s more, the tribal affiliation might not be a “display” to others.
Hence wedrifid’s leaving that word out of his bullet point list.