Holden Karnofsky’s Singularity Institute critique: Is SI the kind of organization we want to bet on?
The sheer length of GiveWell co-founder and co-executive director Holden Karnofsky’s excellent critique of the Singularity Institute means that it’s hard to keep track of the resulting discussion. I propose to break out each of his objections into a separate Discussion post so that each receives the attention it deserves.
Is SI the kind of organization we want to bet on?
This part of the post has some risks. For most of GiveWell’s history, sticking to our standard criteria—and putting more energy into recommended than non-recommended organizations—has enabled us to share our honest thoughts about charities without appearing to get personal. But when evaluating a group such as SI, I can’t avoid placing a heavy weight on (my read on) the general competence, capability and “intangibles” of the people and organization, because SI’s mission is not about repeating activities that have worked in the past. Sharing my views on these issues could strike some as personal or mean-spirited and could lead to the misimpression that GiveWell is hostile toward SI. But it is simply necessary in order to be fully transparent about why I hold the views that I hold.
Fortunately, SI is an ideal organization for our first discussion of this type. I believe the staff and supporters of SI would overwhelmingly rather hear the whole truth about my thoughts—so that they can directly engage them and, if warranted, make changes—than have me sugar-coat what I think in order to spare their feelings. People who know me and my attitude toward being honest vs. sparing feelings know that this, itself, is high praise for SI.
One more comment before I continue: our policy is that non-public information provided to us by a charity will not be published or discussed without that charity’s prior consent. However, none of the content of this post is based on private information; all of it is based on information that SI has made available to the public.
There are several reasons that I currently have a negative impression of SI’s general competence, capability and “intangibles.” My mind remains open, and I include specifics on how my impression could be changed.
Weak arguments. SI has produced enormous quantities of public argumentation, and I have examined a very large proportion of this information. Yet I have never seen a clear response to any of the three basic objections I listed in the previous section. One of SI’s major goals is to raise awareness of AI-related risks; given this, the fact that it has not advanced clear/concise/compelling arguments speaks, in my view, to its general competence.
Lack of impressive endorsements. I discussed this issue in my 2011 interview with SI representatives and I still feel the same way on the matter. I feel that given the enormous implications of SI’s claims, if it argued them well it ought to be able to get more impressive endorsements than it has.
I have been pointed to Peter Thiel and Ray Kurzweil as examples of impressive SI supporters, but I have not seen any on-record statements from either of these people that show agreement with SI’s specific views, and in fact (based on watching them speak at Singularity Summits) my impression is that they disagree. Peter Thiel seems to believe that speeding the pace of general innovation is a good thing; this would seem to be in tension with SI’s view that AGI will be catastrophic by default and that no one other than SI is paying sufficient attention to “Friendliness” issues. Ray Kurzweil seems to believe that “safety” is a matter of transparency, strong institutions, etc. rather than of “Friendliness.” I am personally in agreement with the things I have seen both of them say on these topics. I find it possible that they support SI because of the Singularity Summit or to increase general interest in ambitious technology, rather than because they find “Friendliness theory” to be as important as SI does.
Clear, on-record statements from these two supporters, specifically endorsing SI’s arguments and the importance of developing Friendliness theory, would shift my views somewhat on this point.
Resistance to feedback loops. I discussed this issue in my 2011 interview with SI representatives and I still feel the same way on the matter. SI seems to have passed up opportunities to test itself and its own rationality by e.g. aiming for objectively impressive accomplishments. This is a problem because of (a) its extremely ambitious goals (among other things, it seeks to develop artificial intelligence and “Friendliness theory” before anyone else can develop artificial intelligence); (b) its view of its staff/supporters as having unusual insight into rationality, which I discuss in a later bullet point.
SI’s list of achievements is not, in my view, up to where it needs to be given (a) and (b). Yet I have seen no declaration that SI has fallen short to date, nor an explanation of what will be changed to deal with it. SI’s recent release of a strategic plan and monthly updates are improvements from a transparency perspective, but they still leave me feeling as though there are no clear metrics or goals by which SI is committing to be measured (aside from very basic organizational goals such as “design a new website” and very vague goals such as “publish more papers”) and as though SI places a low priority on engaging people who are critical of its views (or at least not yet on board), as opposed to people who are naturally drawn to it.
I believe that one of the primary obstacles to being impactful as a nonprofit is the lack of the sort of helpful feedback loops that lead to success in other domains. I like to see groups that are making as much effort as they can to create meaningful feedback loops for themselves. I perceive SI as falling well short on this front. Pursuing more impressive endorsements and developing benign but objectively recognizable innovations (particularly commercially viable ones) are two possible ways to impose more demanding feedback loops. (I discussed both of these in my interview linked above).
Apparent poorly grounded belief in SI’s superior general rationality. Many of the things that SI and its supporters and advocates say imply a belief that they have special insights into the nature of general rationality, and/or have superior general rationality, relative to the rest of the population. (Examples here, here and here). My understanding is that SI is in the process of spinning off a group dedicated to training people on how to have higher general rationality.
Yet I’m not aware of any of what I consider compelling evidence that SI staff/supporters/advocates have any special insight into the nature of general rationality or that they have especially high general rationality.
I have been pointed to the Sequences on this point. The Sequences (which I have read the vast majority of) do not seem to me to be a demonstration or evidence of general rationality. They are about rationality; I find them very enjoyable to read; and there is very little they say that I disagree with (or would have disagreed with before I read them). However, they do not seem to demonstrate rationality on the part of the writer, any more than a series of enjoyable, not-obviously-inaccurate essays on the qualities of a good basketball player would demonstrate basketball prowess. I sometimes get the impression that fans of the Sequences are willing to ascribe superior rationality to the writer simply because the content seems smart and insightful to them, without making a critical effort to determine the extent to which the content is novel, actionable and important.
I endorse Eliezer Yudkowsky’s statement, “Be careful … any time you find yourself defining the [rationalist] as someone other than the agent who is currently smiling from on top of a giant heap of utility.” To me, the best evidence of superior general rationality (or of insight into it) would be objectively impressive achievements (successful commercial ventures, highly prestigious awards, clear innovations, etc.) and/or accumulation of wealth and power. As mentioned above, SI staff/supporters/advocates do not seem particularly impressive on these fronts, at least not as much as I would expect for people who have the sort of insight into rationality that makes it sensible for them to train others in it. I am open to other evidence that SI staff/supporters/advocates have superior general rationality, but I have not seen it.
Why is it a problem if SI staff/supporters/advocates believe themselves, without good evidence, to have superior general rationality? First, it strikes me as a belief based on wishful thinking rather than rational inference. Second, I would expect a series of problems to accompany overconfidence in one’s general rationality, and several of these problems seem to be actually occurring in SI’s case:
Insufficient self-skepticism given how strong its claims are and how little support its claims have won. Rather than endorsing “Others have not accepted our arguments, so we will sharpen and/or reexamine our arguments,” SI seems often to endorse something more like “Others have not accepted their arguments because they have inferior general rationality,” a stance less likely to lead to improvement on SI’s part.
Being too selective (in terms of looking for people who share its preconceptions) when determining whom to hire and whose feedback to take seriously.
Paying insufficient attention to the limitations of the confidence one can have in one’s untested theories, in line with my Objection 1.
Overall disconnect between SI’s goals and its activities. SI seeks to build FAI and/or to develop and promote “Friendliness theory” that can be useful to others in building FAI. Yet it seems that most of its time goes to activities other than developing AI or theory. Its per-person output in terms of publications seems low. Its core staff seem more focused on Less Wrong posts, “rationality training” and other activities that don’t seem connected to the core goals; Eliezer Yudkowsky, in particular, appears (from the strategic plan) to be focused on writing books for popular consumption. These activities seem neither to be advancing the state of FAI-related theory nor to be engaging the sort of people most likely to be crucial for building AGI.
A possible justification for these activities is that SI is seeking to promote greater general rationality, which over time will lead to more and better support for its mission. But if this is SI’s core activity, it becomes even more important to test the hypothesis that SI’s views are in fact rooted in superior general rationality—and these tests don’t seem to be happening, as discussed above.
Theft. I am bothered by the 2009 theft of $118,803.00 (as against a $541,080.00 budget for the year). In an organization as small as SI, it really seems as though theft that large relative to the budget shouldn’t occur and that it represents a major failure of hiring and/or internal controls.
In addition, I have seen no public SI-authorized discussion of the matter that I consider to be satisfactory in terms of explaining what happened and what the current status of the case is on an ongoing basis. Some details may have to be omitted, but a clear SI-authorized statement on this point, with as much information as can reasonably be provided, would be helpful.
A couple of positive observations to add context here:
I see significant positive qualities in many of the people associated with SI. I especially like what I perceive as their sincere wish to do whatever they can to help the world as much as possible, and the high value they place on being right as opposed to being conventional or polite. I have not interacted with Eliezer Yudkowsky but I greatly enjoy his writings.
I’m aware that SI has relatively new leadership that is attempting to address the issues behind some of my complaints. I have a generally positive impression of the new leadership; I believe the Executive Director and Development Director, in particular, to represent a step forward in terms of being interested in transparency and in testing their own general rationality. So I will not be surprised if there is some improvement in the coming years, particularly regarding the last couple of statements listed above. That said, SI is an organization and it seems reasonable to judge it by its organizational track record, especially when its new leadership is so new that I have little basis on which to judge these staff.
Wrapup
While SI has produced a lot of content that I find interesting and enjoyable, it has not produced what I consider evidence of superior general rationality or of its suitability for the tasks it has set for itself. I see no qualifications or achievements that specifically seem to indicate that SI staff are well-suited to the challenge of understanding the key AI-related issues and/or coordinating the construction of an FAI. And I see specific reasons to be pessimistic about its suitability and general competence.
When estimating the expected value of an endeavor, it is natural to have an implicit “survivorship bias”—to use organizations whose accomplishments one is familiar with (which tend to be relatively effective organizations) as a reference class. Because of this, I would be extremely wary of investing in an organization with apparently poor general competence/suitability to its tasks, even if I bought fully into its mission (which I do not) and saw no other groups working on a comparable mission.
Harsh but true. Luke seems ready to take all this to heart, and make improvements to address each of these points.
Yes, especially if by “ready to take all this to heart” you mean “already agreed with most of the stuff on organizational problems before Holden wrote the post.” :)
That was half of my initial reaction as well; the other half:
The critique mostly consists of points that are pretty persistently bubbling beneath the surface around here, and get brought up quite a bit. Don’t most people regard this as a great summary of their current views, rather than persuasive in any way? In fact, the only effect I suspect this had on most people’s thinking was to increase their willingness to listen to Karnofsky in the future if he should change his mind. Since the post is basically directed at LessWrongians as an audience, I find all of that a bit suspicious (not in the sense that he’s doing this deliberately).
Also, the only part of the post that interested me was this one (about the SI as an organization); the other stuff seemed kinda minor—praising with faint damns, relative to true outsiders, and so perhaps slightly misleading to LessWrongians.
Reading this (at least a year old, I believe) makes me devalue current protestations:
http://www.givewell.org/files/MiscCharities/SIAI/siai%202011%2002%20III.doc
I just assume people are pretty good at manipulating my opinion, and honestly, that often seems to be more the focus in the “academic outreach.” People who think about signalling (outside of economics, evolution, etc.) are usually signalling bad stuff. Paying 20K or whatever to have someone write a review of your theory is also really, really interesting, as apparently SI is doing (it’s on the balance sheet somewhere for that “commissioned” review; I forget the exact amount). Working on a dozen papers on which you might have only 5% involvement (again: or whatever) is also really, really interesting. I can’t evaluate SI, but they smell totally unlike scientists and quite like philosophers. Which is probably true and only problematic inasmuch as EY thinks other philosophy is mostly bunk. The closest thing to actually performed science on LW I’ve seen was that bit about rates of evolution, which was rather scatterbrained. If anyone can point me to some science, I’d be grateful. The old joke about Comp Sci (neither about Comp nor Sci) need not apply.
Apart from the value of having a smart, sincere person who likes you and has seriously tried to appreciate you give you their opinion of you … Holden’s post directly addresses “why the hell should people give money to you?” Particularly as his answer—as a staff member of a charity evaluator—is “to support your goals, they should not give money to you.” That’s about as harsh an answer as anyone could give a charity: “you are a net negative.”
My small experience is on the fringes of Wikimedia. We get money mostly in the form of lots of small donations from readers. We have a few large donations (and we are very grateful indeed for them!) but we actively look for more small donations (a) to make ourselves less susceptible to the influence of large donors, and (b) to recruit co-conspirators: if people donate even a small amount, they feel like part of the team, and that’s worth quite a lot to us.
The thing is that Wikimedia has never been very good at playing the game. We run this website and we run programmes associated with it. Getting money out of people has been a matter of shoving a banner up. We do A/B testing on the banners! But if we wanted to get rabid about money, there’s a lot more we could be doing. (At possible expense of the actual mission.)
SIAI doesn’t have the same wide reader base to get donations from. But the goal of a charity that cares about its objectives should be independence. I wonder how far they can go in this direction: to be able to say “well, we don’t care what you say about us, us and our readers are enough.” I wonder how far the CMR will go.
Sorry, I’m not quite understanding your first paragraph. The subsequent piece I agree with completely, and I think it applies in principle to a lot of SI activities (even if they aren’t looking for small donors). The same idea could roughly guide their outlook on “academic outreach,” except there it’s a donation of time rather than money. For example, gaining credibility from a few big names is probably a bad idea, as is trying to play the game of seeking credibility.
On the first paragraph, apologies for repeating, but just to clarify: I’m assuming that everyone already should know that even if you’re sympathetic to SI’s goals, it’s a bad idea to donate to them. Maybe the article was useful for SI to better understand why people might feel that way; I’m just saying I don’t think it was, strictly speaking, “persuasive” to anyone. Except that I was initially somewhat persuaded that Karnofsky is worth listening to in evaluating SI. I’m claiming, I guess, that I was much more persuaded that it was worth listening to Karnofsky on this topic than I should have been, since everything he says is too obvious to imply shared values with me. So if, in a few years, he changes his mind on SI, I’ve now decided that I won’t weight that very heavily in my own evaluation. I don’t mean that as a criticism of Karnofsky (his write-up was obviously fantastic); I’m just explicating my own thought process.
I felt it was very persuasive.
Just as a data point, I was rather greatly persuaded by Karnofsky’s argument here. As someone who reads LW more often for the cognitive science/philosophy stuff and not so much for the FAI/Singularity stuff, I did not have a very coherent opinion of the SI, particularly one that incorporated objective critiques (such as Karnofsky’s).
Furthermore, I certainly did not, as you assert, know that it is a bad idea to donate to the Singularity Institute. In fact, I had often heard the opposite here.
Thanks. That’s very interesting to me, even as an anecdote. I’ve heard the opposite here too; that’s why I made it a normative statement (“everyone already should know”). Between the missing money and the publication record, I can’t imagine what would make SI look worth investing in to me. Yes, judging that way would sometimes lead you astray. But even posts like this one, http://lesswrong.com/lw/43m/optimal_employment/?sort=top (which I picked since Luke helped write it), are pretty much the norm around here: basically, an insufficient attempt to engage with the conventional wisdom.
How much should you like this place just because they’re hardliners on issues you believe in? (Generic “you.”) There are lots of compatibilists, materialists, consequentialists, MWIers, or whatever in the world. Less Wrong seems unusual in being rather hardline on these issues, but that’s usually more a sign that people have turned something into a social issue than a matter of intellectual conviction (or better, competence). Anyway, I’ve probably gone inappropriately off topic for this page; I’m just rambling. To say at least something on topic: a few months back there was an issue of Nature on philanthropy in science (a cover article and a few other pieces, as I recall); it’s easily searchable, I’m sure, and may have some relevance, both as SI tries to raise money and as it “commissions” pieces.
By the way, was the 2009 theft resolved successfully, preferably in a “money back in SI” way?
Luke mentioned this in his long list of recent improvements made by SI:
Sounds like the criminal case didn’t work out :(