SIAI Fundraising
Please refer to the updated document here: http://lesswrong.com/lw/5il/siai_an_examination/
This version is an old draft.
NOTE: The analysis here will be updated as people point out errors! I’ve tried to be accurate, but this is my first time looking at these (somewhat hairy) non-profit tax documents. Errors will be corrected as soon as I know of them! Please double-check and criticize this work so that it can improve.
Document History:
4/25/2011 - Initial post.
4/25/2011 - Corrected Yudkowsky compensation data.
4/26/2011 - Added expanded data from 2002 − 2009 in Overview, Revenue, and Expenses
4/27/2011 - Added expanded data to Officer Compensation & Big Donors
Todo:
Create a detailed program services analysis that examines the SIAI’s allocation of funds to the Summit, etc.
Create an index of organizational milestones.
Disclaimer:
I am not affiliated with the SIAI.
I have not donated to the SIAI prior to writing this.
Acting on gwern’s suggestion in his Girl Scout Cookie analysis, here is a first pass at looking at SIAI funding, suggestions for a funding task-force, etc.
The SIAI’s Form 990s are available at GuideStar and Foundation Center. You must register in order to access the files at GuideStar.
2002 (Form 990-EZ)
2003 (Form 990-EZ)
2004 (Form 990-EZ)
2005 (Form 990)
2006 (Form 990)
2007 (Form 990)
2008 (Form 990-EZ)
2009 (Form 990)
Overview
Filing Error 1? - There appears to be a minor typo, to the effect of $4.86, in the end-of-year balance for the 2004 document. This money is accounted for; the results just aren’t entered correctly. * Someone else please verify.
Filing Error 2? - The 2005 document appears to have accounted for expenses incorrectly, resulting in an excess $70,179.00 reported in the end-of-year asset balance. This money is accounted for under 2005 Part III; it is merely not correctly deducted from the year-end asset balance. * Someone else please verify.
Theft? - The organization reported $118,803.00 in theft in 2009, resulting in a year-end asset balance lower than expected. The SIAI is currently pursuing legal restitution.
Analysis:
The SIAI asset sheet grew until 2008, when expenditures outpaced revenue.
Assets would have resumed growth in 2009, except for the theft (see above).
The current asset balance is insufficient to sustain a year of operation at the existing rate of expenditure, so a significant loss of revenue would force a reduction in services. Such a loss of revenue may be unlikely, but a reasonable goal would be to build up a year’s reserves.
Revenue
Analysis:
Income from public support (donations) has grown steadily with a significant regular increase starting in 2006.
This regular increase is a result of significant new contributions from big donors.
As an example, public support in 2007 is largely composed of significant contributions from Peter Thiel ($125k), Brian Cartmell ($75k), and Robert F. Zahra Jr ($123k) for $323k total in large scale individual contributions (break down below).
In 2007 the SIAI started receiving income from program services. Currently all “Program Service” revenue is from operation of the Singularity Summit.
The Singularity Summit revenue continues to grow. The Summit is roughly breaking even. If this continues, the Summit will be able to compensate speakers better, improve the quality of proceedings, or net some of the revenue for other goals.
Expenses
Analysis:
This chart could use improvement; it’s categorized rather clinically. It would be more useful to break down the Contracts and Program categories (though this may not be possible from the Form 990s).
The grants in 2002, 2003, and 2004 were paid to Eliezer Yudkowsky for work “of unique relevance and value to the Singularity, to Artificial Intelligence, or to Friendly Artificial Intelligence.”
Program expenses include operating the Singularity Summit, Visiting Fellows Program, etc.
The Other category includes lots of administrative costs that are somewhat itemized.
Overall, expenses have grown at pace with revenue.
Salaries have steadily declined. (More detail below.)
Program service expenses have increased, but this is expected as the Singularity Summit has grown and new services like the Visiting Fellows Program have been introduced.
Big Donors
Analysis
Contributions in the 2010 column are derived from http://singinst.org/donors. Contributions of less than $5,000 are excluded for the sake of brevity.
Contributions in 2003 − 2009 are from official filings. The 2009 Form 990 discloses excess donations for 2006 − 2009. This is not an exhaustive list of contributions, just what could be found in the Form 990s available online.
The 2006 donation from Peter Thiel is sourced from a discussion with the SIAI.
Peter Thiel and a few other big donors compose the bulk of the organization’s revenue.
Should any major donor be lost, the SIAI would have to reduce services. It would be good to see a broader base of donations moving forward.
Note, however, that over the past five years the base of donations HAS been improving. We don’t have the 2010 Form 990 yet, but just based on data from MA and SingInst.com things are looking a lot better.
Officer Compensation
This graph needs further work to reflect the duration of officers’ service.
From 2002 to 2005, Eliezer Yudkowsky received compensation in the form of grants from the SIAI for AI research.
Starting in 2006 all compensation for key officers is reported as salaried instead of in the form of grants.
SIAI officer compensation has decreased in recent years.
Eliezer’s base compensation as salary increased 20% in 2008 and then 7.8% in 2009.
It seems reasonable to compare Eliezer’s salary with that of professional software developers. Eliezer would be able to make a fair amount more working in private industry as a software developer.
Both Yudkowsky and Vassar report working 60-hour work weeks.
It isn’t indicated how the SIAI conducts performance reviews and salary adjustment evaluations.
Prior to doing this investigation, I had some expectation that the Singularity Summit was a money-losing operation. I had an expectation that Eliezer probably made around $70k (programmer money discounted for being paid by a non-profit). I figured the SIAI had a broader donor base. I was off base on all counts. I am not currently an SIAI supporter. My findings have greatly increased the probability that I will donate in the future.
Overall, the allocation of funds strikes me as highly efficient. I don’t know exactly how much the SIAI is spending on food and fancy tablecloths at the Singularity Summit, but I don’t think I care: it’s growing and it’s nearly breaking even. An attendee can have a very confident expectation that their fee covers their cost to the organization. If you go and contribute you add pure value by your attendance.
At the same time, the organization has been able to expand services without draining the coffers. A donor can hold a strong expectation that the bulk of their donation will go toward actual work in the form of salaries for working personnel or events like the Visiting Fellows Program.
Eliezer’s compensation is slightly more than I thought. I’m not sure what upper bound I would have balked at or would balk at. I do have some concern about the cost of recruiting additional Research Fellows. The cost of additional RFs has to be weighed against new programs like Visiting Fellows.
The organization appears to be managing its cash reserves well. It would be good to see the SIAI build up some asset reserves so that it could operate comfortably in years where public support dips, or so that it could take advantage of unexpected opportunities.
The organization has a heavy reliance on major donor support. I would expect the 2010 filing to reveal a broadening of revenue and continued expansion of services, but I do not expect the organization to have become independent of big donor support. Things are much improved from 2006, and without the initial support from Peter Thiel the SIAI would not be able to provide the services it has, but it would still be good to see the SIAI’s operating capacity be larger than any one donor’s annual contribution. It is important for Less Wrong to begin a discussion of broadening SIAI revenue sources.
Where to Start?
There is low-hanging fruit to be found. The SIAI’s annual revenue is well within the range of our ability to effect significant impact. These suggestions aren’t all equal in their promise; they are just things that come to my mind.
Grant Writing. I don’t know a lot about it. Presumably a Less Wrong task force could investigate likely candidate grants, research proper grant writing methodology, and then apply for the grants. Academic members of Less Wrong who have applied for research grants would already have expertise in this area.
Software. There are a lot of programmers on Less Wrong. A task force could develop an application and donate the revenue to the SIAI.
Encouraging Donations. Expanding the base of donations is valuable. The SIAI is heavily dependent on donations from Peter Thiel. A task force could focus on methods of encouraging donations from new supporters big and small.
Prize Winning. There are prizes out there to be won. A Less Wrong task force could identify a prize and then coordinate a group to work towards winning it.
Crowd Source Utilization. There are sites devoted to crowdsourced funding for projects. A task force could conceive of a project with the potential to generate more revenue than required to build it. Risk could be reduced through the use of crowdsourcing, with excess revenue donated to the SIAI. (Projects don’t have to be software; they could be fabricating an interesting device, a piece of art, or music.)
General Fund Raising Research. There are a lot of charities in the world. Presumably there are documented methods for growing them. A task force could attack this material and identify low-hanging fruit or synthesize new techniques.
Okay, that didn’t happen. I got my standard salary in 2009, no more. I think my standard salary must’ve been put down as payment for the Sequences… or something; I don’t know. But I didn’t get anything but my standard salary in 2009 and $84K sounds right for the total of that salary.
Fixed.
The section that led me to my error was 2009 III 4c. The amount listed as expenses is $83,934, whereas your salary is listed in 2009 VII Ad as $95,550. The text in III 4c says:
“This year Eliezer Yudkowsky finished his posting sequences on Less Wrong [...] Now Yudkowsky is putting together his blog posts into a book on rationality. [...]”
This is listed next to two other service accomplishments (the Summit and Visiting Fellows).
If I had totaled the program accomplishments section I would have seen that I was counting some money twice (and also noticed that the total in this field doesn’t feed back into the main sheet’s results).
Please accept my apology for the confusion.
Hm. $95K still sounds too high, but if I recall correctly, owing to a screwup in our payments processor at that time, my salary for the month of January 2010 was counted into the 2009 tax year instead of 2010.
No apology is required; you wrote without malice.
Am I the only one who is now curious how Eliezer spends the bulk of his disposable income? Is it to save for retirement in case the Singularity either doesn’t occur, or occurs in a Hansonian way, despite his best efforts?
Large air-conditioned living space, healthy food, both for 2 people (myself and girlfriend). My salary is at rough equilibrium with my spending; I am not saving for retirement. The Bay Area is, generally speaking, expensive.
Wow, my intuition was rather off on what $95,550 in compensation means for someone living in the Bay Area. Here’s some actual calculations for others who are similarly curious. (There are apparently quite a few of us, judging from the votes on my comment.)
Assuming salary is 75% of compensation, that comes to $71,662. $4,557 in CA state tax. $11,666 federal income tax. $5,482 FICA tax. So $49,957 in after-tax income.
For comparison, my wife and I (both very frugal) spend about $35,000 (excluding taxes and savings) per year. Redwood City’s rent is apparently double the rent in our city, which perfectly accounts for the additional $15,000.
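For anyone who wants to check that arithmetic, here is a minimal sketch in Python. The tax amounts are simply the figures quoted above (rough 2009-era estimates for a single filer), not a recomputation from actual tax tables, and the 75% salary share is an assumption.

```python
# Rough after-tax estimate using the figures quoted in the comment above.
# The tax amounts are the commenter's estimates, not official tax-table values.
total_compensation = 95_550              # reported 2009 compensation
salary = int(total_compensation * 0.75)  # assume salary is 75% of compensation -> 71,662

ca_state_tax = 4_557         # quoted CA state income tax
federal_income_tax = 11_666  # quoted federal income tax
fica_tax = 5_482             # roughly 7.65% of salary (Social Security + Medicare)

after_tax = salary - ca_state_tax - federal_income_tax - fica_tax
print(salary, after_tax)     # 71662 49957
```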
Eliezer, you might want to consider getting married, in which case you can file your taxes jointly, and save about 6 thousand dollars per year (assuming your girlfriend has negligible income).
You’re not saving for retirement because you think that, one way or another, it’s unlikely you’ll be collecting that money?
Is the Singularity Institute supporting her through your salary?
I hope you’re not too put out by the rudeness of this question. I’ve decided that I’m allowed to ask because I’m a (small) donor. I doubt your answer will jeopardize my future donations, whatever it is, but I do have preferences about this.
(Also, it’s very good to hear that you’re taking health seriously! Not that I expected otherwise.)
My salary is my own, to do with as I wish. I’m not put out by the rudeness, per se, but I will not entertain further questions along these lines—it is not something on which I’m interested in having other people vote.
Quiz: Who said it?
Who said:
Context please.
By a startling coincidence, V_V’s editing seems deliberately deceptive:
Question 5, if you don’t want to paste into Find. Ph33r my drunken library school graduate skillz!
I didn’t edit it myself; I pasted it from there.
Anyway, the editing doesn’t seem to be particularly deceptive: apparently there is a special clause for Wall Street bankers that allows them to trade the bright future of our intergalactic (sic) descendants for their immediate personal luxuries.
… are you implying that Eliezer is wrong to be working to save the world, because he could earn significantly more money and pay others to do better? How much do you think his current “crunch time” efforts would cost?
No. Yudkowsky is paid by the SI, hence he could effectively donate to the SI just by accepting a lower salary.
He claims that any single dollar of extra funding the SI has could make the difference between an exceptionally positive scenario (Friendly superhuman AI, intergalactic civilization, immortality, etc.) and an exceptionally negative one (evil robots who kill us all). He asks other people to forfeit a substantial part of their income to secure this positive scenario and avert the negative one. He claims to be working to literally save the world, and therefore to be working on his very own survival.
And then, he draws from the SI resources that could be used to hire additional staff and do more research, just to support his lifestyle of relative luxury.
He could live in a smaller house; he could move himself and the SI to a less expensive area (Silicon Valley is one of the most expensive areas in the world, and there doesn’t seem to be a compelling reason for the SI to be located there). If he is honest about his claimed beliefs, if he “confronted them, rationally, full-on”, how could he possibly be trading any part of the bright future of our (and his) intergalactic descendants, how could he be trading the chance of his own survival, for a nice house in an expensive neighborhood?
I’m not suggesting he should move to a slum in Calcutta and live on a subsistence wage, but he certainly doesn’t seem willing to make any sacrifice for what he claims to believe, especially when he asks other people to make such sacrifices.
Of course, I’m sure he can come up with a thousand rationalizations for that behavior. He could say that a lifestyle any less luxurious than his current one would negatively affect the productivity of his oh-so-important work. I won’t buy it, but everyone is entitled to their opinion.
There are compelling reasons to be there: it is the epicenter of the global tech world. You will not find a place more interested in these topics, with more potential donors, critics, potential employees, etc.
This is the same reasoning for why the WikiMedia Foundation moved from St Petersburg, Florida to San Francisco back in 2007 or 2008 or so: that they would be able to recruit more talent and access more big donors.
I was a little disgusted at the high cost of living since I thought the WMF’s role ought to be basically keeping the servers running and it was a bad idea to go after more ambitious projects and the big donors to pay for those projects. But sure enough, a year or two later, the multi-million dollar donations and grants began to come in. Or notice that Givewell is still located in NYC, even after spending a while working out of India with a much lower cost of living (Mumbai, not your Calcutta, but close enough).
I still think the projects themselves are largely wasted, and that the WMF should have been obsessed with reducing editor attrition & deletionism rather than SWPL projects like African DVDs and so I stopped donating long ago; but the move itself performed as advertised.
AFAIK the SI doesn’t do software development or direct computer science research. Other than operating Less Wrong, their main outputs seem to be philosophical essays and some philosophical publications, plus the annual Singularity Summits (which make sense to hold in Silicon Valley, but which don’t have to be physically close to the SI’s main location). A cursory look at the SI team pages suggests that most of the staff are not CompSci professionals, and many of them didn’t get their education or do research at Stanford or other Silicon Valley colleges.
From the donors point of view, IIUC, most of the money donated to the SI comes from very few big donors, Peter Thiel in particular donates much more than everybody else (maybe more than everybody else combined?). I suppose that such donors would continue to support the SI even if it was relocated.
Even assuming that there are benefits from staying in Silicon Valley that outweigh the costs, the point stands that Yudkowsky could accept a lower salary while still staying well above subsistence level.
The audience and donors are there, which is enough, but your point about the people is not strong: most of the people in Silicon Valley were not taught at Stanford; does that mean they are wasting their time there? Of course not, it just points out how California sucks in strange people and techies (and both) from around the world. E.g., my elder sister was raised and went to college on the east coast, but guess where she’s working now? Silicon Valley.
You suppose that because it is convenient for your claim that being in Silicon Valley is wasteful, not because it is true. The widespread absence of telecommuting in corporations, the worldwide emphasis on clustering into cities so you can be physically close to everyone, how donors in every charity like to physically meet principals and “look them in the eyes”, the success of LW meetups—all these point to presence being better than absence.
SI would never have gotten Thiel’s support, I suspect, if it had remained in Atlanta. Having gotten his support, it will not keep it by moving out of Silicon Valley. Having moved out of Silicon Valley, it will find it hard to find any more donors.
What, like Thiel is guaranteed to never drop support? Even in such an absurd situation, why would you risk it by ignoring all other big donors? And what if you wanted to grow? If SI were to leave Silicon Valley to save some money on salaries, it would be a major long-term strategic mistake which would justify everything critics might say about SI being incompetent in choosing to be penny-wise and pound-foolish.
Dunno, but that wasn’t the point I was addressing.
Yes, of course Silicon Valley attracts CompSci professionals from all over the world, but the SI doesn’t employ them. Strange people, you say? I’ve never been to San Francisco, but I’ve heard that it’s considered home to weirdos of every possible kind. Maybe that’s the people the SI panders to?
Well, I dunno. It’s not like Peter Thiel doesn’t know how to use the Internet or can’t afford flying. Facebook, for instance, was located in Massachusetts and only moved to the Silicon Valley in 2011.
All people who like SI are by definition out of the mainstream, but not all people out of the mainstream are whom SI ‘panders’ to.
And yet...
How wasteful of them. Don’t they know they can just use the Internet to do this thing called ‘social networking’? There’s no reason for them to be in Silicon Valley. Hopefully their shareholders will do something about that.
Oh, right. That makes sense, I guess. Of course, as you say, he may have reasons he hasn’t shared for this lifestyle. Low prior probability of them being good reasons though.
I believe he saves it to ensure that FAI development would continue to occur in the event of a collapse of SI. He doesn’t exactly live an ostentatious lifestyle.
I like seeing these numbers. Transparency + people organizing the information is great. Seeing this presented here (on Less Wrong) where I am likely to see it makes me more likely to donate. Thanks!
Ditto!
I’m surprised that no-one’s mentioned this—it’s hard to imagine how someone can steal that much money. Can someone at SIAI tell us whether they’re allowed to talk about what happened; and if you can’t right now, do you have any idea when you might be able to?
The theft must have been discovered to be more extensive than thought, because one early report says
Which is significantly less than $120k.
So in December 2009, Alicia Isaac was arrested for stealing from SIAI. One year later, she was hired by the Lifeboat Foundation, where she apparently still works. On the finance board, no less!
Was she vindicated in 2010, or is the Lifeboat Foundation just stupid?
Luke in his questions page for when he became director said the case was ongoing and scheduled for trial, IIRC, and that was either 2011 or 2012.
I’m not sure stupid is the right adjective for the things I wonder about Lifeboat...
Based on the links, I don’t think she actually works there any more than all the other advisory board members do. She isn’t listed on the staff page.
Looking in the Internet Archive, their staff page never lists anything with the keyword ‘finance’ (for the past 2-3 years), so I’m not sure that’s a strong argument from silence.
The “finance board” that she’s a part of is one of LF’s “advisory boards”, which look like they total 1000+ people; see the first link in the grandparent. These people aren’t employees even though they have bios on the site. My impression is they just get listed on the site and added to a mailing list.
Yep. They seem to just look for people who have some connection, however tenuous, to what they do, and then ask nicely if you’d like to be on the board. Then they email you occasionally and maintain a wee profile with links to your stuff. It’s pretty okay.
I think you may be right. I just took a look at the 2011 form 990 (2012 is not out), which is where I’d expect to first see her mentioned if she was handling books for them, but the form is listed as being prepared by the president Eric Klien and Isaac is not mentioned anywhere in it I can see.
Michael Vassar sent out an email with more information back in Dec 2009 (shortly after they discovered the theft?). I’m not sure if it was just to donors or also included newsletter subscribers. It basically said, ‘we trusted this person and they took advantage of that trust.’ It also states that since legal action is still pending, they have to “limit what [they] say”, but that you can send further inquiries to Michael.
Thanks. I guess the followup questions are:
Is the legal action still pending, or can the situation be talked about openly now?
Has SIAI been able to recover the money?
Was it a mistake to trust a contractor with access to >$100k of funds? Do they still do that?
My understanding is that the case is ongoing in criminal court, at least as of a few weeks ago, and that the money has largely not yet been recovered. As far as I know, only that one contractor had the relevant financial access, which was required for the job, but obviously the financial controls on that access were not sufficient. I think that currently only the President and COO have the relevant access to the accounts (though others, including the majority-donor board, have limited access to monitor the accounts).
Seeing SIAIs financials has made me more likely to donate to SIAI.
Does anyone have links to writing on what SIAI would do with increased funding? For example, “Allison Hu is a brilliant young Y and has come up with good ideas a,b,c. We would like to hire her, but we don’t have the funding to do so”. I’d like to see arguments about SIAIs marginal spending.
Also. Brandon! You should have talked about this at the meetup so we could all say what a great idea it was!
For a little more information, there’s also this donor list, which consists of my best effort at finding $1K+ donors over the last few years:
http://singinst.org/donors
If I missed anyone who donated and wanted to be on the list, please contact me at anissimov@intelligence.org. Making this list involved going over thousands of Paypal records over the past few years. (Necessary because all Summit payments are also intertwined with actual donations in the records, making it necessary to mentally filter out all payments that are obviously for the Summit.)
Zvi Mowshowitz! Wow, color me surprised. Zvi is a retired professional Magic player. I used to read his articles and follow his play. Small world.
http://lesswrong.com/user/Zvi
Come now, it’s probably a different Zvi Mowshowitz.
Since the Zvi who posts here is indeed the same Zvi Mowshowitz he speaks of, it is close to certain that the one who donated the money is as well.
Zvi is one of the leaders of the New York Less Wrong community, actually. Munchkinism generalizes.
I played Magic against Zvi using one of his own decks, and the deck won—I was there, but I wasn’t involved.
It is close to certain that ArisKatsaris was joking (imitating the format of an exchange where one person says something about, say, a political candidate named John Smith, and the other person says “Oh, John Smith, isn’t that the guy who won the Nobel Prize in Chemistry a few years ago?” and the first person says “I dunno, probably a different John Smith”, while subverting it by (seemingly very confidently) applying it to a comparatively very uncommon name).
...wow, I entirely failed to pick up on that. >.<
I was just joking, as ata explained. Sorry for the confusion. :-)
Birthday paradox? Given a set of donor-list-readers and another set of donors, there’s a better chance than one would expect that there’s a commonality. :)
Did he retire since last year?
That doesn’t list the 1000 USD that I gave.
Can you email me your name so I can check our records, confirm the donation, and list it? Did you check the box that said “list my name”?
Ignore previous; I structured the contribution so that it would go under another human’s name or be anonymous. Still, be it known that I gave 1000 USD, just ask User:Kevin or “Michael Vassar”.
Employee compensation generally includes more than just salary: there’s the cost of the employer’s share of Social Security, health insurance, and any other benefits. If these are included in the figures listed, then the employees’ salaries are considerably less. If the Singularity Institute isn’t providing health insurance, then buying individual policies is a major expense for the employees. The Bay Area is also one of the most expensive places to live in the U.S.
If donation money is used to buy worktime (which is good and well), why not move to Thailand and save the world from there? :-)
Sounds great from a weather perspective :)
Alas, folk need to see collaborators, arrange Singularity Summits, and interact with donors, board members, and media in the US. Constant travel to and fro would be an imperfect substitute, and flight costs (including time and jet lag) would claw back the cost-of-living gains and more.
My suggestion wasn’t completely serious, but thanks for the answer anyway!
I’m given to understand this is why the Visiting Fellows program is being temporarily moved to Bali.
Last I heard, this is not actually the case.
The rule of thumb I’ve heard is that an employee’s cost to their employer is between two and three times their salary. Even if the employer is not paying benefits, they still have to carry worker’s comp insurance, for example, as well as administrative overhead on managing payroll, etc.
Note that there’s also data for 2002 and 2003, though that time period may not be relevant to much now.
I’m also going to see if I can get a copy of the 2010 filing.
Edit: The 2002 and on data is now largely incorporated. Still working on a few bits. Don’t have the 2010 data, but the SIAI hasn’t necessarily filed it yet.
Kudos and karma for putting so much work into summarizing all this.
I think this should be on the front page. Brandon, you should also mention whether you are affiliated with SIAI and whether you’ve donated to SIAI before.
Once I finish the todo at the top and get independent checking on a few things I’m not clear on, I can post it to the main section. I don’t think there’s value in pushing it to a wider audience before it’s ready.
From the OP: “I am not currently an SIAI supporter. My findings have greatly increased the probability that I will donate in the future. ”
Oops, missed that. Thanks.
Why is this post deleted? Did something go wrong when transferring it to the main section?
Current main section: http://lesswrong.com/lw/5il/siai_an_examination/
Congratulations on this writeup; it’s pretty good. (Nothing in it strikes me as erroneous from my previous quick readings of the filings except that Sequences thing.) I hadn’t actually expected anyone to take my suggestion, so this is a pleasant surprise.
Personally I think this is pretty shocking and the worst thing I’ve ever learned about SIAI.
(And since it’s relevant when saying these kinds of things, I’ve donated to SIAI before.)
EDIT: False alarm, apparently there was no sequences bonus
Should’ve noticed your own confusion, that didn’t actually happen.
http://lesswrong.com/r/discussion/lw/5fo/siai_fundraising/40v2
(Base compensation rates of increase sound about right, though.)
FWIW $95.5K doesn’t seem excessive to me.
It seems a little on the high side to me—and $180,000 (when you combine his salary and the Sequences money) in 2009 is ludicrous. I mean, if that’s what people want to spend their money on, fair enough, but it’s a big chunk of the total funds raised by SIAI that year. So when people talk about the marginal utility of donating a dollar to the SIAI, an equally valid way to phrase it might be “the marginal utility of increasing the salary of someone who earned $180,000 in 2009 by 28 cents”.
But I’m not wanting to dissuade anyone from spending their money if that’s what they want to spend it on...
Assuming marginal money is allocated proportional to existing spending, which is surely not the case. (Yes, the $180,000 figure would be unreasonable if true.)
Shockingly high or shockingly low?
Where did you think the money was going?!
They thought it wasn’t going to paying me $180K. Correctly. +1 epistemic point to everyone who expressed surprise at this nonfact, −1 point for hindsight bias to anyone who claimed not to be shocked by it.
Fair enough. I assumed that most of the money would be going on salary, so if an organisation with a small staff had a large income, it’d be paying high salaries. It’s one reason (of many) I’ve never donated. So I’ve just made a $10 donation, partly to punish myself for my own biases, and partly to make some restitution for acting on those biases in a way which might have seemed insulting.
More info and discussion on this please? This sounds like something that I actually, just maybe, could make myself useful by, depending on what it means.
While I’m too unreliable and responsibility-shy to dare take the lead on such a project, I might be able to come up with and bootstrap some art project if there are other people, interested and dedicated, who can complement my weaknesses and take over if I lose interest.
I have no idea what the scheme for something like this resulting in money is, though, or whether the LW-related stuff I’m planning could be reworked to do that as well as the awareness-raising and mostly-for-fun purposes it’s currently planned for. If it ever becomes anything other than vaporware, that is.
This doesn’t list any payment to me for purchase and safekeeping of paperclips.
See http://lesswrong.com/r/discussion/lw/5fo/siai_fundraising/40xg
OK.
Can everyone see all of the images? I received a report that some appeared broken.
All of the images are blocked by my work internet filter. I can see them all at home.
Perhaps everyone who gives over $100 to SIAI could get a star by their name on LessWrong.
I am uncomfortable with making the link between SIAI and LW so official (even though they sponsor LW).
Thinking about it more, I am uncomfortable with linking LW social status so officially with support for SIAI.
I’m embarrassed to say this’d probably make me a fair bit more likely to donate.
I agreement-upvoted this when I thought it read the exact opposite; I would be less likely to donate if this occurred. Selling CEV shares is all very well, but that idea sounds crass and divisive.
I agree, hence why I’m embarrassed about it.
Yay, LessWrong Gold accounts! :-)
reference
I’m not happy about justifying the high payouts to EY as “that’s what a programmer might make”. Instead, put him (and any other SIAI full-time employees, possibly just Michael Vassar) on half pay (and half time), and suggest that he work in the “real world” (something not SIAI/futurism related) the rest of the time. This means that his presumed skills are tested and exercised with actual short-term tasks, and it also gives an approximate market price for his skills.
Currently, his market-equivalence to a programmer is decoupled from reality.
This is a great idea, if SIAI put signalling what moral people they are over actually bringing about the best outcome.
Can you elaborate? I don’t understand my proposal as related to signaling at all; it’s about measuring EY’s (and others’) effectiveness, rather than taking it for granted. Yes, it’s costly in the event it’s unnecessary, but corruption/ineffectiveness/selfishness (where EY and Vassar are primarily building a career and a niche for themselves, consciously or unconsciously) is also costly.
Perhaps other employers should also employ everyone half-time so that they get more information about their employees’ market value?
If SIAI were paying Eliezer to be a “generic” programmer, then I suppose they could get a reasonable idea of whether he’s a good one in the way you describe. Or they could just fire him and hire some other guy for the same salary: that’s not a bad way of getting (where SIAI is) a middling-competent programmer for hire.
But it doesn’t seem to me even slightly credible that that’s what they’re paying Eliezer for. They might want him writing AI software—or not, since he’s well known to think that writing an AI system is immensely dangerous—in which case sending him out to work half-time for some random software company isn’t going to give much idea of how good he is at that. Or they might want him Thinking Deep Thoughts about rationality and friendly AI and machine ethics and so forth, in which case (1) his “market value” would need to be assessed by comparing with professional philosophers and (2) presumably SIAI sees the value of his work in terms of things like reducing existential risk, which the philosophy-professor market is likely to be … not very responsive to.
What sending Eliezer out to work half-time commercially demonstrably won’t do is to measure his “effectiveness” at anything that seems at all likely to be what SIAI thinks it’s worth paying him $100k/year for.
The most likely effects seem to me some combination of: (1) Eliezer spends less time on SIAI stuff and is less useful to SIAI. (2) Eliezer spends all his time on SIAI stuff and gets fired from his other job. (3) Eliezer finds that he can make a lot more money outside SIAI and jumps ship or demands a big pay rise from SIAI. (4) Eliezer decides that an organization that would do something so obviously silly is not fit to (as he sees it) try to decide the fate of the universe, quits SIAI, and goes to do his AI-related work elsewhere.
No combination of these seems like a very good outcome. What’s the possible benefit for SIAI here? That with some (not very large) probability Eliezer turns out not to be a very good programmer, doesn’t get paid very well by the commercial half-time gig, accepts a lower salary from SIAI on the grounds that he obviously isn’t so good at what he does after all, but doesn’t simultaneously get so demoralized as to reduce his effectiveness at what he does for SIAI? Well, I suppose it’s barely possible, but it doesn’t seem like something worth aiming for.
What am I missing here? What halfway plausible way is there for this to work out well?
I think it’s entirely possible for people within corporations to build cozy empires and argue that they should be paid well, and for those same people to in fact be incompetent at value creation—that is, they could be zero-sum internal-politics specialists. The corporation would benefit from enforcing a policy against this sort of “employee lock-in”, just like corporations now have policies against “supplier lock-in”.
This would entail, among other things, everyone within the corporation having a job description that is sufficiently generic that other people also fit the same job description, and for outside auditors to regularly evaluate whether the salaries being paid for a given job description are comparable to industry standards.
I haven’t heard of anyone striving to prevent “employee lock-in” (though that might just be the wrong words) - but people certainly do strive for those related policies.
There are lots of potential upsides: 1. At the prospect of potentially being tested, EY shapes up and starts producing. 2. Due to real-world experience, EY’s ideas are pushed along faster and more accurately. 3. SIAI discovers that EY is “just a guy” and reorganizes, in the process jumping out of its recurrent circling of the cult attractor. 4. Due to EY’s stellar performance in the real world, other people start following the “work half time and do rationality and existential risk reduction half time” lifestyle.
In general, my understanding of SIAI’s proposed financial model is “other people work in the real world, and send money without strings to SIAI, in exchange for infrequent documentation regarding SIAI’s existential risk reduction efforts”. I think that model is unsustainable, because the organization could switch to becoming simply about sustaining and growing itself.
SIAI firing Eliezer would be like Nirvana firing Kurt Cobain. Most of the money and public attention will follow Eliezer, not stay with SIAI.
You’re not alone in wanting Eliezer to start publishing new results already. But there’s also the problem that he likes secrecy way too much. Alexandros Marinos once compared his attitude to staying childless: every childless person came from an unbroken line of people who reproduced (=published their research), and couldn’t exist otherwise.
For example, our decision-theory-workshop group is pretty much doing its own thing now. I believe it diverged from Eliezer’s ideas a while ago, when we started thinking about UDT-ish theorem provers instead of TDT-ish causal graph thingies. I don’t miss Eliezer’s guidance, but I sure miss his input—it could be very valuable for the topics that interest us. But our discussions are open, so I guess it’s a no go.
This is something I’ve never really understood. I can understand wanting to keep any moves directly towards creating an AI quiet—if you create 99% of an AI and someone else does the other 1%, goodbye world. It may not be optimal, but it’s a comprehensible position. But the work on decision theory is presumably geared towards codifying Friendliness in such a way that an AI could be ‘guaranteed Friendly’. That seems like the kind of thing that would be aided by having many eyeballs looking at it, while being useless for anyone who wanted to put together a cobbled-together quick-results AI.
Eliezer stated his reasons here:
So in a nutshell, he thinks solving decision theory will make building unfriendly AIs much easier. This doesn’t sound right to me because we already have idealized models like Solomonoff induction or AIXI, and they don’t help much with building real-world approximations to these ideals, so an idealized perfect solution to decision theory isn’t likely to help much either. But maybe he has some insight that I don’t.
I think Eliezer must have changed his mind after writing those words, because his TDT book was written for public consumption all along. (He gave two reasons for not publishing it sooner: he wanted to see if a university would offer him a PhD based on it, and he was using DT as a problem to test potential FAI researchers.) I guess his current lack of participation in our DT mailing list is probably due to some combination of being busy with his books and lack of significant new insights.
I think TDT is different from the “reflective decision systems” he was talking about, which sounds like it refers to a theory specifically of self-modifying agents.
That’s the first time I noticed the pun. Good one. I want a tshirt.
Ah. I see what he means, if you’re talking about a) just the ‘invariant under reflection’ part and not Friendliness and b) you’re talking about a strictly pragmatic tool. That makes sense.
1. Starts producing what? 2. What real-world experience, and how will it be relevant to his SIAI work? 3. Yup, that’s possible. See below. 4. Just like they do for all the other people who do stellar work as software developers, you mean?
I think #3 merits a closer look, since indeed it’s one of the few ways that your proposal could have a positive outcome. So let’s postulate, for the sake of argument, that indeed Eliezer’s skills in software development are not particularly impressive and he doesn’t do terribly well in his other half-time job. So … now they fire him? Because he hasn’t performed very well in another job doing different kinds of work from what he’s doing for SIAI? Yeah, that’s a good way to do things.
It would probably be good for SIAI to fire Eliezer if he’s no good at what he’s supposed to be doing for them. But, if indeed he’s no good at that, they won’t find it out by telling him to get a job as a software engineer and seeing what salary he can make.
Yes, it’s bad that SIAI can’t easily document how much progress it’s making with existential risk reduction so that potential donors can decide whether it’s worth supporting. But Eliezer’s market-salary-as-a-generic-programmer is—obviously—not a good measure of how much progress it’s making. Thought experiment: Consider some random big-company CEO who’s being paid millions. Suppose they get bored of CEOing and take a fancy to AI, and suppose they agree to replace Eliezer at SIAI, and even to work for half his salary. In this scenario, should SIAI tell their donors: “Great news, everyone! We’ve made a huge stride towards avoiding AI-related existential risk. We just employed someone whose market salary is measured in the millions of dollars!”?
Yes, it’s bad if SIAI can’t tell whether Eliezer is actually doing work worth the salary they pay him. (My guess, incidentally, is that he is even if his actual AI-related work is of zero value, on PR grounds. But that’s a separate issue.) But measuring something to do with Eliezer that has nothing whatever to do with the value of the work he does for SIAI is not going to solve that problem.
You seem to be optimizing this entire problem for avoiding the mental pain of worrying about whether you’re being cheated. This is the wrong optimization criterion.
I’m working from “organizations are superhumanly intelligent (in some ways) and so we should strive for Friendly organizations, including structural protections against corruption” standpoint.
I hardly think the SIAI, a tiny organisation heavily reliant on a tiny pool of donors, is the most likely organisation to become corrupt. Even when I thought Eliezer was being paid significantly more than he was (see above threads) I wouldn’t call that corruption. Eliezer is doing a job. His salary is largely paid for by a very small number of individuals. As the primary public face of SIAI he is under more scrutiny than anyone else in the organisation. As such, if those people donating don’t think he’s worth the money, he’ll be gone very quickly—and so long as they do, it’s their money to spend.
What’s the good reason to care about whether EY’s salary is calibrated to the market rate, rather than/independent from whether it’s too low or high for this particular situation?
I don’t understand why SI (i.e., its board) shouldn’t employ EY and MV full-time and continually evaluate the effectiveness of their work for it, like any other organization in the world would do.
The fact that both are costly is irrelevant, the point is that one has the potential to be vastly more costly than the other.
Downvoted.
“high payouts”? Good programmers are worth their weight in gold. (As for AI researchers, bad ones are worthless, good-but-not-good-enough ones will simply kill us all, and good-enough ones are literally beyond value...) NYT:
“half pay (and half time)”? I’m just a programmer, not an AI researcher, but I’m confident that this applies equally: it is ridiculously hard to apply concentrated thought to solving a problem when you have to split your focus. As Paul Graham said:
A policy of downvoting posts that you disagree with will, over time, generate a “Unison” culture, driving away / evaporatively cooling dissent.
Though you’re correct about interruptions and sub-day splitting, in my experience it is entirely feasible to split your time X days vs Y days without suffering context-switch overhead—that is, since we’re presumably sleeping, we’re already forced to “boot up” in the morning. I agree it’s harder to coordinate a team some of whom are full time, some are half time, and some are the other half time—but you’d have 40k to make up the lost team productivity.
What do you think downvotes are for? It’s just a number, it’s not an insult.
(Now, if you want to suggest that perhaps I shouldn’t announce a downvote when replying with objections, perhaps I could be convinced of that. I think I’d appreciate a downvote-with-explanation more than a silent downvote.)
The man-month is mythical.
Downvotes are for maintaining the quality of the conversations, not expressing agreement or disagreement. No matter what someone’s opinion is, as long as its incorrectness would not be made evident by reading the sequences, downvotes should only express disapproval of the quality of the argument, not the conclusion. In a case like this, no argument for the opinion that you disapprove of was made. Unless he refused to acknowledge the substance of your disagreement, which was not the case here, no downvote was warranted.
It’s not just that I disagreed with you, it’s that you are wrong in a more objective sense.
How can you tell the two apart?
STL’s downvote was appropriate and he gave far more justification than was needed. I similarly downvoted both your comments here because they both gave prescriptions of behavior to others that was bad advice based on ignorance.
More appropriate reference classes: philosophers, writers, teachers, fundraisers.
I’m not happy with how big Eliezer’s salary is either, but having him work half-time as a programmer to verify the market value of his skills is probably not the best thing to do about it.
What rational reasons do you have?
I can imagine two rational reasons for feeling that someone is overpaid. First and most commonly, if someone is overpaid relative to their productivity. For example, a programmer who writes buggy, poorly designed code and makes 130k for it is clearly overpaid, as is a CEO who makes zillions while driving their company into the ground. This objection could be bluntly phrased as “Eliezer is a hack”—if you think so, say so. I suspect that very few people on LW hold this opinion, especially if, as I said above, they agree that good-enough AI researchers are literally beyond value. (That is, if you subscribe to the basic logic that AI holds the potential to unleash a technological singularity that can either destroy the world or remake it according to our wishes, then EY’s approach is the way to go about doing the latter. Even if you disagree with the particulars, he is obviously onto something, and such insights have value.)
Second, your objection may be “someone who works for a nonprofit shouldn’t be richly compensated”. For example, you could probably go through Newsweek’s Fifteen Highest-Paid Charity CEOs, and pick one where you could say “yeah, that’s a well-run organization, but that CEO is paid way too much—why don’t they voluntarily accept a smaller, but still generous, salary, like a few hundred K?” I don’t believe that the second one applies to EY, because he works in an expensive area. More importantly, the fundamental root of this objection would be “if X accepted less money, the nonprofit would have more resources to spend elsewhere”. That’s pretty obvious when you’re talking about mega-zillion CEO salaries. What about Eliezer’s case? What if he handed back, say, 10k of his salary to SIAI? That’s a significant hit in income for someone whose income matches expenses and whose expenses aren’t unreasonable, and it would be much less significant to SIAI. Finally, EY is already working 60 hours a week for SIAI, and you would want him to donate a chunk of his current salary on top of that? Really?
On the other hand, I can think of an irrational reason to be unhappy with Eliezer’s salary, which I think I’ll be too polite to mention here.