The bulk of this is about a vague impression that SIAI isn’t transparent and accountable. You gave one concrete example of something they could improve: having a list of their mistakes on their website. This isn’t a bad idea, but AFAIK GiveWell is about the only charity that currently does this, so it doesn’t seem like a specific failure on SIAI’s part not to include this. So why the feeling that they’re not transparent and accountable?
SIAI’s always done a good job of letting people know exactly how it’s raising awareness—you can watch the Summit videos yourself if you want. They could probably do a bit more to publish appropriate financial records, but I don’t think that’s your real objection. Besides that, what? Anti-TB charities can measure how much less TB there is per dollar invested; SIAI can’t measure what percentage safer the world is, since the world-saving is still in the basic-research phase. You can’t measure the value of the Manhattan Project in “cities destroyed per year” while it’s still going on.
By the Outside View, charities that can easily measure their progress with statistics like “cases of TB prevented” are better than those that can’t. By the Outside View, charities that employ people who don’t sound like megalomaniacal mad scientists are better than those that employ people who do. By the Outside View, charities that don’t devote years of work to growing and raising awareness before really starting to work full-time on their mission are better than ones that do. By the Outside View, SIAI is a sucky charity, and they know it.
There are some well-documented situations where the Outside View is superior to the Inside View, but there are also a lot of cases where it isn’t—a naive Outside Viewist would have predicted Obama would lose the election, even when he was way ahead in the polls, because by the Outside View black people don’t become President. To the degree that you have additional evidence, and to the degree that you trust yourself to claim additional evidence only when you actually have it and not when you’re making excuses for yourself, the Inside View is superior to the Outside View. The Less Wrong Sequences are several hundred really comprehensive blog posts’ worth of additional evidence trying to convey Inside information on why SIAI and its strategy aren’t as crazy as they sound; years of interacting with SIAI people is Inside information on whether they’re honest and committed. I think these suffice to shift my probability estimates: not all the way, but preventing the apocalypse is the sort of thing that only needs a small probability to be worth thinking about.
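To make that “small probability” arithmetic concrete, here is a minimal expected-value sketch. Every figure in it is an illustrative assumption made up for the example, not an estimate anyone in this thread has defended:

```python
# Toy expected-value comparison; all figures are made-up placeholders,
# chosen only to show how a tiny probability of averting catastrophe
# can dominate the calculation.

lives_per_dollar_tb = 1 / 1000   # assume $1,000 saves one life via a TB charity
p_avert_per_dollar = 1e-9        # assume a one-in-a-billion marginal effect per dollar
lives_at_stake = 7e9             # roughly everyone alive today

ev_tb = lives_per_dollar_tb                     # 0.001 expected lives per dollar
ev_xrisk = p_avert_per_dollar * lives_at_stake  # 7.0 expected lives per dollar

print(f"TB charity:     {ev_tb} expected lives per dollar")
print(f"X-risk charity: {ev_xrisk} expected lives per dollar")
```

The whole argument, of course, turns on whether the one-in-a-billion figure is even roughly right, which is exactly what the Inside View evidence is supposed to bear on.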
The other Outside View argument would be that, whether or not you trust SIAI, it’s more important to signal that you only donate to transparent and accountable organizations, in order to TDT your way into making other people only donate to transparent and accountable organizations and convince all charities to become transparent and accountable. This is a noble idea, but the world being destroyed by unfriendly AI would throw a wrench into the “improve charity” plan, so this would be an excellent time to break your otherwise reasonable rule.
In addition to the points that I made in my other response to your comment, I would add that the SIAI staff have not created an environment which welcomes criticism from outsiders.
The points in my other response were well considered, and yet as I write, that response has been downvoted three times, so that it is now hidden from view.
I see Eliezer’s initial response to XiXiDu’s post Should I believe what the SIAI claims? as evidence that the SIAI staff have gotten in the habit of dismissing criticisms out of hand whenever they question the credibility of the critic. This habit is justified up to a point, but my interactions with the SIAI staff have convinced me that (as a group) they go way too far, creating a selection effect which subjects them to confirmation bias and groupthink.
In SIAI’s defense, I’ll make the following three points:
•Michael Vassar corresponded with me extensively in response to the criticisms of SIAI which I raised in the comments to my post (One reason) why capitalism is much maligned and Roko’s post Public Choice and the Altruist’s Burden.
•As prase remarks, the fact that my top level posts have not been censored is an indication that “LW is still far from Objectivism.”
•SIAI staff member Jasen sent me a private message thanking me for making my two posts “Existential Risk and Public Relations” and “Other Existential Risks” and explaining that SIAI plans to address these points in the future.
I see these things as weak indications that SIAI may take my criticisms seriously. Nevertheless, I perceive SIAI’s openness to criticism to date as far lower than GiveWell’s.
For examples of GiveWell’s openness to criticism, see Holden Karnofsky’s back-and-forth with Laura Deaton here and the threads on the GiveWell research mailing list on Cost-Effectiveness Estimates, Research Priorities and Plans and Environmental Concerns and International Aid, as well as Holden’s posting Population growth & health.
As you’ll notice, the GiveWell staff are not arbitrarily responsive to criticism—if they were, they would risk never getting anything done. But their standard for responsiveness is much higher than anything that I’ve seen from SIAI. For example, compare Holden’s response to Laura Deaton’s (strong) criticism with Eliezer’s response to my top level post.
In order for me to feel comfortable donating to SIAI I would need to see SIAI staff exhibiting a level of responsiveness and engagement comparable to the level that the GiveWell staff have exhibited in the links above.
Yvain,
Thanks for your feedback.
•As I discuss in Other Existential Risks, I feel that SIAI has not (yet) provided a compelling argument for the idea that a focus on AI is the most cost-effective way of reducing existential risk. Obviously I don’t expect an airtight argument (as it would be impossible to offer one), but I do feel SIAI needs to say a lot more on this point. I’m encouraged that SIAI staff have informed me that more information on this point will be forthcoming in future blog posts.
•I agree with SarahC’s remarks here and here on the subject of there being a “problem with connecting to the world of professional science.”
•I agree with you that SIAI and its strategy aren’t as crazy as they sound at first blush. I also agree that the Less Wrong sequences suffice to establish that SIAI has some people of very high intellectual caliber and that this distinguishes SIAI from most charities. Despite these facts, at present I’m very skeptical that SIAI’s approach to reducing existential risk is the optimal one, for the reasons given in the two bullet points above.
•Regarding: “This is a noble idea, but the world being destroyed by unfriendly AI would throw a wrench into the ‘improve charity’ plan, so this would be an excellent time to break your otherwise reasonable rule.”
Here the question is just whether there’s sufficient evidence that donating to SIAI has a sufficiently high (carefully calibrated) expected value to warrant forgoing the incentive effects that come from donating only to transparent and accountable organizations.
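As a rough way of formalizing that question (a sketch only; the quantities below are hypothetical placeholders, not figures that SIAI or GiveWell has published):

```python
# Hypothetical sketch of the tradeoff: the direct expected value of a
# donation versus the expected cost of weakening the "fund only
# transparent and accountable charities" norm. All inputs are
# placeholder assumptions, not published estimates.

def net_value(direct_ev, norm_value, p_norm_weakened):
    """Direct good done minus the expected damage to the norm."""
    return direct_ev - norm_value * p_norm_weakened

# Made-up example: a donation doing 7.0 units of direct expected good,
# a norm worth 1e6 units, and a 1-in-10-million chance that donating
# opaquely weakens it:
print(net_value(7.0, 1e6, 1e-7))  # 6.9: breaking the rule comes out ahead
```

Breaking the otherwise reasonable rule is warranted only if one’s calibrated estimates make that difference come out positive.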
I personally am very skeptical that this question has an affirmative answer. I may be wrong.