As Holden said, I generally think that Holden’s objections to SI “are either correct (especially re: past organizational competence) or incorrect but not addressed by SI in clear argumentative writing (this includes the part on ‘tool’ AI),” and we are working hard to fix both categories of issues.
In this comment I would merely like to argue for one small point: that the Singularity Institute is undergoing comprehensive changes — changes which I believe to be improvements that will help us to achieve our mission more efficiently and effectively.
Holden wrote:
I’m aware that SI has relatively new leadership that is attempting to address the issues behind some of my complaints. I have a generally positive impression of the new leadership; I believe the Executive Director and Development Director, in particular, to represent a step forward in terms of being interested in transparency and in testing their own general rationality. So I will not be surprised if there is some improvement in the coming years...
Louie Helm was hired as Director of Development in September 2011. I was hired as a Research Fellow that same month, and made Executive Director in November 2011. Below are some changes made since September. (Pardon the messy presentation: LW cannot correctly render tables in comments.)
SI before Sep. 2011: Very few peer-reviewed research publications. SI today: More peer-reviewed publications coming in 2012 than in all past years combined. Additionally, I alone have a dozen papers in development; I am directing every step of research and writing and will write each final draft, but I am collaborating with remote researchers so that I need to put in only 5%-20% of the total hours required myself.
SI before Sep. 2011: No donor database / a very broken one. SI today: A comprehensive donor database.
SI before Sep. 2011: Nearly all work performed directly by SI staff. SI today: Most work outsourced to remote collaborators so that SI staff can focus on the things that only they can do.
SI before Sep. 2011: No strategic plan. SI today: A strategic plan developed with input from all SI staff, and approved by the Board.
SI before Sep. 2011: Very little communication about what SI is doing. SI today: Monthly progress reports, plus three Q&As with Luke about SI research and organizational development.
SI before Sep. 2011: Very little direct management of staff and projects. SI today: Luke monitors all projects and staff work, and meets regularly with each staff member.
SI before Sep. 2011: Almost no detailed tracking of the expenses of major SI projects (e.g. Summit, papers, etc.). The sole exception seems to be that Amy was tracking the costs of the 2011 Summit in NYC. SI today: Detailed tracking of the expenses of major SI projects for which this is possible (Luke has a folder in Google docs for these spreadsheets, and the summary spreadsheet is shared with the Board).
SI before Sep. 2011: No staff worklogs. SI today: All staff members share their worklogs with Luke, Luke shares his worklog with all staff plus the Board.
SI before Sep. 2011: Best practices not followed for bookkeeping/accounting; accountant’s recommendations ignored. SI today: Meetings with consultants about bookkeeping/accounting; currently working with our accountant to implement best practices and find a good bookkeeper.
SI before Sep. 2011: Staff largely separated, many of them not well-connected to the others. SI today: After a dozen or so staff dinners, staff much better connected, more of a team.
SI before Sep. 2011: Want to see the basics of AI Risk explained in plain language? Read The Sequences (more than a million words) or this academic book chapter by Yudkowsky. SI today: Want to see the basics of AI Risk explained in plain language? Read Facing the Singularity (now in several languages, with more being added) or listen to the podcast version.
SI before Sep. 2011: A hard-to-navigate website with much outdated content. SI today: An entirely new website that is easier to navigate and has much new content (nearly complete; should launch in May or June).
SI before Sep. 2011: So little monitoring of funds that $118k was stolen in 2010 before SI noticed. (Note that we have won stipulated judgments to get much of this back, and have upcoming court dates to argue for stipulated judgments to get the rest back.) SI today: Our bank accounts have been consolidated, with 3-4 people regularly checking over them.
SI before Sep. 2011: SI publications exported straight to PDF from Word or Google Docs, sometimes without even author names appearing. SI today: All publications being converted into a slick, usable LaTeX template (example), with all references checked and put into a central BibTeX file.
SI before Sep. 2011: No write-up of our major public technical breakthrough (TDT) using the mainstream format and vocabulary comprehensible to most researchers in the field (this is what we have at the moment). SI today: Philosopher Rachael Briggs, whose papers on decision theory have been twice selected for the Philosopher’s Annual, has been contracted to write an explanation of TDT and publish it in one of a select few leading philosophy journals.
SI before Sep. 2011: No explicit effort made toward efficient use of SEO or our (free) Google Adwords. SI today: Highly optimized use of Google Adwords to direct traffic to our sites; currently working with SEO consultants to improve our SEO (of course, the new website will help).
(Just to be clear, I think this list shows not that “SI is looking really great!” but instead that “SI is rapidly improving and finally reaching a ‘basic’ level of organizational function.”)
...which is not to say, of course, that things were not improving before September 2011. It’s just that the improvements have accelerated quite a bit since then.
For example, Amy was hired in December 2009 and is largely responsible for these improvements:
Built a “real” Board and officers; launched monthly Board meetings in February 2010.
Began compiling monthly financial reports in December 2010.
Began tracking Summit expenses and seeking Summit sponsors.
Played a major role in canceling many programs and expenses that were deemed low ROI.
And note that these improvements would not and could not have happened without more funding than the level of previous years—if, say, everyone had been waiting to see these kinds of improvements before funding.
note that these improvements would not and could not have happened without more funding than the level of previous years
Really? That’s not obvious to me. Of course you’ve been around for all this and I haven’t, but here’s what I’m seeing from my vantage point...
Recent changes that cost very little:
Donor database
Strategic plan
Monthly progress reports
A list of research problems SI is working on (it took me 16 hours to write)
IntelligenceExplosion.com, Friendly-AI.com, AI Risk Bibliography 2012, annotated list of journals that may publish papers on AI risk, a partial history of AI risk research, and a list of forthcoming and desired articles on AI risk (each of these took me only 10-25 hours to create)
Detailed tracking of the expenses for major SI projects
Staff worklogs
Staff dinners (or something that brought staff together)
A few people keeping their eyes on SI’s funds so theft would be caught sooner
Optimization of Google Adwords
Stuff that costs less than some other things SI had spent money on, such as funding Ben Goertzel’s AGI research or renting downtown Berkeley apartments for the later visiting fellows:
Research papers
Management of staff and projects
Rachael Briggs’ TDT write-up
Best-practices bookkeeping/accounting
New website
LaTeX template for SI publications; references checked and then organized with BibTeX
SEO
Do you disagree with these estimates, or have I misunderstood what you’re claiming?
A lot of charities go through this pattern before they finally work out how to transition from a board-run/individual-run tax-deductible band of conspirators to being a professional staff-run organisation tuned to doing the particular thing they do. The changes required seem simple and obvious in hindsight, but it’s a common pattern for it to take years, so SIAI has been quite normal, or at the very least not been unusually dumb.
(My evidence is seeing this pattern close-up in the Wikimedia Foundation, Wikimedia UK (the first attempt at which died before managing it; the second barely made it through) and the West Australian Music Industry Association, and anecdotal evidence from others. Everyone involved always feels stupid at having taken years to achieve the retrospectively obvious. I would be surprised if this aspect of the dynamics of nonprofits had not been studied.)
edit: Luke’s recommendation of The Nonprofit Kit For Dummies looks like precisely the book all the examples I know of needed to have someone throw at them before they even thought of forming an organisation to do whatever it is they wanted to achieve.
I don’t think this response supports your claim that these improvements “would not and could not have happened without more funding than the level of previous years.”
I know your comment is very brief because you’re busy at minicamp, but I’ll reply to what you wrote, anyway: Someone of decent rationality doesn’t just “try things until something works.” Moreover, many of the things on the list of recent improvements don’t require an Amy, a Luke, or a Louie.
I don’t even have past management experience. As you may recall, I had significant ambiguity aversion about the prospect of being made Executive Director, but as it turned out, the solution to almost every problem X has been (1) read what the experts say about how to solve X, (2) consult with people who care about your mission and have solved X before, and (3) do what they say.
When I was made Executive Director and phoned our Advisors, most of them said “Oh, how nice to hear from you! Nobody from SingInst has ever asked me for advice before!”
That is the kind of thing that makes me want to say that SingInst has “tested every method except the method of trying.”
Donor database, strategic plan, staff worklogs, bringing staff together, expenses tracking, funds monitoring, basic management, best-practices accounting/bookkeeping… these are all literally from the Nonprofits for Dummies book.
Maybe these things weren’t done for 11 years because SI’s decision-makers did make good plans but failed to execute them due to the usual defeaters. But that’s not the history I’ve heard, except that some funds monitoring was insisted upon after the large theft, and a donor database was sorta-kinda-not-really attempted at one point. The history I’ve heard is that SI failed to make these kinds of plans in the first place, failed to ask advisors for advice, failed to read Nonprofits for Dummies, and so on.
Money wasn’t the barrier to doing many of those things, it was a gap in general rationality.
I will agree, however, that what is needed now is more money. We are rapidly becoming a more robust and efficient and rational organization, stepping up our FAI team recruiting efforts, stepping up our transparency and accountability efforts, and stepping up our research efforts, and all those things cost money.
At the risk of being too harsh… When I began to intern with the Singularity Institute in April 2011, I felt uncomfortable suggesting that people donate to SingInst, because I could see it from the inside and it wasn’t pretty. (And I’m not the only SIer who felt this way at the time.)
But now I do feel comfortable asking people to donate to SingInst. I’m excited about our trajectory and our team, and if we can raise enough support then we might just have a shot at winning after all.
Luke has just told me (personal conversation) that what he got from my comment was, “SIAI’s difficulties were just due to lack of funding” which was not what I was trying to say at all. What I was trying to convey was more like, “I didn’t have the ability to run this organization, and knew this—people who I hoped would be able to run the organization, while I tried to produce in other areas (e.g. turning my back on everything else to get a year of FAI work done with Marcello or writing the Sequences) didn’t succeed in doing so either—and the only reason we could hang on long enough to hire Luke was that the funding was available nonetheless and in sufficient quantity that we could afford to take risks like paying Luke to stay on for a while, well before we knew he would become Executive Director”.
Update: I came out of a recent conversation with Eliezer with a higher opinion of Eliezer’s general rationality, because several things that had previously looked to me like unforced, foreseeable mistakes by Eliezer now look to me more like non-mistakes or not-so-foreseeable mistakes.
It’s Luke you should have fallen in love with, since he is the one turning things around.
On the other hand I can count with one hand the number of established organisations I know of that would be sociologically capable of ceding power, status and control to Luke the way SingInst did. They took an untrained intern with essentially zero external status from past achievements and affiliations and basically decided to let him run the show (at least in terms of publicly visible initiatives). It is clearly the right thing for SingInst to do and admittedly Luke is very tall and has good hair which generally gives a boost when it comes to such selections—but still, making the appointment goes fundamentally against normal human behavior.
(Where I say “count with one hand” I am not including the use of any digits thereupon. I mean one.)
As a minor note, observe that claims of extraordinary rationality do not necessarily contradict claims of irrationality. The sanity waterline is very low.
Do you mean to imply in context here that the organizational management of SIAI at the time under discussion was above average for a nonprofit organization? Or are you just making a more general statement that a system can be irrational while demonstrating above average rationality? I certainly agree with the latter.
Are you comparing it to the average among nonprofits started, or nonprofits extant? I would guess that it was well below average for extant nonprofits, but about or slightly above average for started nonprofits. I’d guess that most nonprofits are started by people who don’t know what they’re doing and don’t know what they don’t know, and that SI probably did slightly better because the people who were being a bit stupid were at least very smart, which can help. However, I’d guess that most such nonprofits don’t live long because they don’t find a Peter Thiel to keep them alive.
Your assessment looks about right to me. I have considerable experience of averagely-incompetent nonprofits, and SIAI looks normal to me. I am strongly tempted to grab that “For Dummies” book and, if it’s good, start sending copies to people …
I don’t see the point of comparing to average nonprofits. Average for-profits don’t realize any profit, and average non-profits just waste money.
I would say SIAI best parallels the average newly started ‘research’ organization that is developing some free-energy device or other, run by non-scientists, with some hired scientists as chaff.
Sadly, I agree. Unless you look at it very closely, SIAI pattern-matches to “crackpots trying to raise money to fund their crackpottiness” fairly well. (What saves them is that their ideas are a lot better than the average crackpot.)
Or are you just making a more general statement that a system can be irrational while demonstrating above average rationality?
Yes, this.
On an arbitrary scale I just made up, below 100 degrees of rationality is “irrational”, and 0 degrees of rationality is “ordinary”. 50 is extraordinarily rational and yet irrational.
Being at 50 while thinking you’re at 100 is being an extraordinary loser (overconfidence leads to big failures).
In any case this is just word play. Holden has seen many organizations that are or were more rational; that’s probably what he means by a lack of extraordinary rationality.
Just to let you know, you’ve just made it on my list of the very few LW regulars I no longer bother replying to, due to the proven futility of any communications. In your case it is because you have a very evident ax to grind, which is incompatible with rational thought.
This comment seems strange. Is having an ax to grind opposed to rationality? Then why does Eliezer Yudkowsky, for example, not hesitate to advocate for causes such as friendly AI? Doesn’t he have an ax to grind? More of one really, since this ax chops trees of gold.
It would seem intellectual honesty would require that you say you reject discussions with people with an ax to grind, unless you grind a similar ax.
From http://www.usingenglish.com: “If you have an axe to grind with someone or about something, you have a grievance, a resentment and you want to get revenge or sort it out.” One can hardly call the unacknowledged emotions of resentment and a need for revenge/retribution compatible with rationality. srdiamond piled up a bunch of (partially correct but irrelevant in the context of my comment) negative statements about SI, making these emotions quite clear.
That’s a restrictive definition of “ax to grind,” by the way—it’s normally used to mean any special interest in the subject: “an ulterior often selfish underlying purpose” (Merriam-Webster’s Collegiate Dictionary).
But I might as well accept your meaning for discussion purposes. If you detect unacknowledged resentment in srdiamond, don’t you detect unacknowledged ambition in Eliezer Yudkowsky?
There’s actually good reason for the broader meaning of “ax to grind.” Any special stake is a bias. I don’t think you can say that someone who you think acts out of resentment, like srdiamond, is more intractably biased than someone who acts out of other forms of narrow self-interest, which almost invariably applies when someone defends something he gets money from.
I don’t think it’s a rational method to treat people differently, as inherently less rational, when they seem resentful. It is only one of many difficult biases. Financial interest is probably more biasing. If you think the arguments are crummy, that’s something else. But the motive—resentment or finances—should probably have little bearing on how a message is treated in serious discussion.
The impression I get from scanning their comment history is that metaphysicist means to suggest here that EY has ambitions he hasn’t acknowledged (e.g., the ambition to make money without conventional credentials), not that he fails to acknowledge any of the ambitions he has.
I don’t think it’s a rational method to treat people differently, as inherently less rational, when they seem resentful.
Thank you for this analysis, it made me think more about my motivations and their validity. I believe that my decision to permanently disengage from discussions with some people is based on the futility of such discussions in the past, not on the specific reasons they are futile. At some point I simply decide to cut my losses.
There’s actually good reason for the broader meaning of “ax to grind.” Any special stake is a bias.
Indeed, present company not excluded. The question is whether it permanently prevents the ax-grinder from listening. EY, too, has his share of unacknowledged irrationalities, but both his status and his ability to listen and to provide insights makes engaging him in a discussion a rewarding, if sometimes frustrating experience.
I don’t know why srdiamond’s need to bash SI is so entrenched, or whether it can be remedied to a degree where he is once again worth talking to, so at this point it is instrumentally rational for me to avoid replying to him.
Well, all we really know is that he chose to. It may be that everyone he works with then privately berated him for it. That said, I share your sentiment. Actually, if SI generally endorses this sort of public “airing of dirty laundry,” I encourage others involved in the organization to say so out loud.
The largest concern from reading this isn’t really what it brings up in a management context, but what it says about SI in general. Here is an area where there is real expertise and there are basic books that discuss well-understood methods, and SI didn’t use any of that. Given that, how likely should I think it is that, when SI and mainstream AI people disagree, part of the problem is the SI people not paying attention to basics?
(nods) The nice thing about general-purpose techniques for winning at life (as opposed to domain-specific ones) is that there’s lots of evidence available as to how effective they are.
Precisely. For one example of an existing baseline: the existing software that searches for solutions to engineering problems, such as ‘self-improvement’ via the design of better chips. It works within a narrowly defined field in order to cull the search space. Should we expect state-of-the-art software of this kind to be beaten by someone’s contemporary paperclip maximizer? By how much?
This is incredibly relevant to AI risk, but the analysis can’t be faked without really having technical expertise.
I haven’t actually found the right books yet, but these are the areas where I decided I should find some “for beginners” text. The important insight is that I’m allowed to use these books as skill/practice/task checklists or catalogues, rather than ever reading them all straight through.
General interest:
Career
Networking
Time management
Fitness
For my own particular professional situation, skills, and interests:
For fitness, I’d found Liam Rosen’s FAQ (the ‘sticky’ from 4chan’s /fit/ board) to be remarkably helpful and information-dense. (Mainly, ‘toning’ doesn’t mean anything, and you should probably be lifting heavier weights in a linear progression, but it’s short enough to be worth actually reading through.)
these are all literally from the Nonprofits for Dummies book. [...] The history I’ve heard is that SI [...] failed to read Nonprofits for Dummies,
I remember that, when Anna was managing the fellows program, she was reading books of the “for dummies” genre and trying to apply them… it’s just that, as it happened, the conceptual labels she accidentally happened to give to the skill deficits she was aware of were “what it takes to manage well” (i.e. “basic management”) and “what it takes to be productive”, rather than “what it takes to (help) operate a nonprofit according to best practices”. So those were the subjects of the books she got. (And read, and practiced.) And then, given everything else the program and the organization was trying to do, there wasn’t really any cognitive space left over to effectively notice the possibility that those wouldn’t be the skills that other people would afterwards complain nobody had acquired and obviously should have. The rest of her budgeted self-improvement effort mostly went toward overcoming self-defeating emotional/social blind spots and motivated cognition. (And I remember Jasen’s skill learning focus was similar, except with more of the emphasis on emotional self-awareness and less on management.)
failed to ask advisors for advice,
I remember Anna went out of her way to get advice from people who she already knew, who she knew to be better than her at various aspects of personal or professional functioning. And she had long conversations with supporters who she came into contact with for some other reasons; for those who had executive experience, I expect she would have discussed her understanding of SIAI’s current strategies with them and listened to their suggestions. But I don’t know how much she went out of her way to find people she didn’t already have reasonably reliable positive contact with, to get advice from them.
I don’t know much about the reasoning of most people not connected with the fellows program about the skills or knowledge they needed. I think Vassar was mostly relying on skills tested during earlier business experience, and otherwise was mostly preoccupied with the general crisis of figuring out how to quickly-enough get around the various hugely-saliently-discrepant-seeming-to-him psychological barriers that were causing everyone inside and outside the organization to continue unthinkingly shooting themselves in the feet with respect to this outside-evolutionary-context-problem of existential risk mitigation. For the “everyone outside’s psychological barriers” side of that, he was at least successful enough to keep SIAI’s public image on track to trigger people like David Chalmers and Marcus Hutter into meaningful contributions to and participation in a nascent Singularity-studies academic discourse. I don’t have a good idea what else was on his mind as something he needed to put effort into figuring out how to do, in what proportions occupying what kinds of subjective effort budgets, except that in total it was enough to put him on the threshold of burnout. Non-profit best practices apparently wasn’t one of those things though.
But the proper approach to retrospective judgement is generally a confusing question.
the kind of thing that makes me want to say [. . .]
The general pattern, at least post-2008, may have been one where the people who could have been aware of problems felt too metacognitively exhausted and distracted by other problems to think about learning what to do about them, and hoped that someone else with more comparative advantage would catch them, or that the consequences wouldn’t be bigger than those of the other fires they were trying to put out.
strategic plan [...] SI failed to make these kinds of plans in the first place,
There were also several attempts at building parts of a strategy document or strategic plan, which together took probably 400-1800 hours. In each case, the people involved ended up determining, from how long it was taking, that, despite reasonable-seeming initial expectations, it wasn’t on track to possibly become a finished presentable product soon enough to justify the effort. The practical effect of these efforts was instead mostly just a hard-to-communicate cultural shared understanding of the strategic situation and options—how different immediate projects, forms of investment, or conditions in the world might feed into each other on different timescales.
expenses tracking, funds monitoring [...] some funds monitoring was insisted upon after the large theft
There was an accountant (who herself already cost like $33k/yr as the CFO, despite being split three ways with two other nonprofits) who would have been the one informally expected to have been monitoring for that sort of thing, and to have told someone about it if she saw something, out of the like three paid administrative slots at the time… well, yeah, that didn’t happen.
I agree with a paraphrase of John Maxwell’s characterization: “I’d rather hear Eliezer say ‘thanks for funding us until we stumbled across some employees who are good at defeating their akrasia and [had one of the names of the things they were aware they were supposed to] care about [happen to be “]organizational best practices[“]’, because this seems like a better depiction of what actually happened.” Note that this was most of the purpose of the Fellows program in the first place—to create an environment where people could be introduced to the necessary arguments/ideas/culture and to help sort/develop those people into useful roles, including replacing existing management, since everyone knew there were people who would be better at their job than they were and wished such a person could be convinced to do it instead.
Note that this was most of the purpose of the Fellows program in the first place [...] to help sort/develop those people into useful roles, including replacing existing management
FWIW, I never knew the purpose of the VF program was to replace existing SI management. And I somewhat doubt that you knew this at the time, either. I think you’re just imagining this retroactively given that that’s what ended up happening. For instance, the internal point system used to score people in the VFs program had no points for correctly identifying organizational improvements and implementing them. It had no points for doing administrative work (besides cleaning up the physical house or giving others car rides). And it had no points for rising to management roles. It was all about getting karma on LW or writing conference papers. When I first offered to help with the organization directly, I was told I was “too competent” and that I should go do something more useful with my talent, like start another business… not “waste my time working directly at SI.”
“I’d rather hear Eliezer say ‘thanks for funding us until we stumbled across some employees who are good at defeating their akrasia and [had one of the names of the things they were aware they were supposed to] care about [happen to be “]organizational best practices[“]’, because this seems like a better depiction of what actually happened.”
… which Eliezer has read and responded to, noting that he did indeed read just that book in 2000 when he was founding SIAI. This suggests that having someone of Luke’s remarkable drive was in fact the missing piece of the puzzle.
Fascinating! I want to ask “well, why didn’t it take then?”, but if I were in Eliezer’s shoes I’d be finding this discussion almost unendurably painful right now, and it feels like what matters has already been established. And of course he’s never been the person in charge of that sort of thing, so maybe he’s not who we should be grilling anyway.
Obviously we need How to be Lukeprog for Dummies. Luke appears to have written many fragments for this, of course.
Beating oneself up with hindsight bias is IME quite normal in this sort of circumstance, but not actually productive. Grilling the people who failed makes it too easy to blame them personally, when it’s a pattern I’ve seen lots and lots, suggesting the problem is not a personal failing.
Agreed entirely—it’s definitely not a mark of a personal failing. What I’m curious about is how we can all learn to do better at the crucial rationalist skill of making use of the standard advice about prosaic tasks—which is manifestly a non-trivial skill.
The Bloody Obvious For Dummies. If only common sense were!
From the inside (of a subcompetent charity—and I must note, subcompetent charities know they’re subcompetent), it feels like there’s all this stuff you’re supposed to magically know about, and lots of “shut up and do the impossible” moments. And you do the small very hard things, in a sheer tour de force of remarkable effort. But it leads to burnout. Until the organisation makes it to competence and the correct paths are retrospectively obvious.
That actually reads to me like descriptions I’ve seen of the startup process.
The problem is that there are two efficiencies/competences here: efficiency in doing the accounting correctly, which is relatively easy, and efficiency in actually doing relevant novel technical work that matters, which is much harder. For the former you can get advice from books; for the latter you won’t get any advice, it’s a harder problem, and the typical level of performance is exactly zero (even for those who get the first part right). The difference in difficulties is larger than that between building a robot kit by following instructions vs designing a groundbreaking new robot and making a billion dollars off it.
The best advice to the vast majority of startups is: dissolve the startup and get normal jobs, starting tomorrow. The best advice to all of them is to take a very good look at themselves, knowing that the most likely conclusion should be “dissolve and get normal jobs”. The failed startups I’ve seen so far were propelled by pure, unfounded belief in themselves (like in a movie where someone doesn’t want to jump, another says “yes, you can do it!”, and that person jumps, but rather than sending a positive message by jumping across and surviving, falls to an instant death while the fire the person was running away from just goes out). The successful startups, on the other hand, had a well-founded belief in themselves (a good track record, attainable goals), or started from a hobby project that had become successful.
Judging from the success rate that VCs have at predicting successful startups, I conclude that the “pure unfounded belief on the one hand, well-founded belief on the other” metric is not easily applied to real organizations by real observers.
Mm. This is why an incompetent nonprofit can linger for years: no-one is doing what they do, so they feel they still have to exist, even though they’re not achieving much, and would have died already as a for-profit business. I am now suspecting that the hard part for a nonprofit is something along the lines of working out what the hell you should be doing to achieve your goal. (I would be amazed if there were not extensive written-up research in this area, though I don’t know what it is.)
That book looks like the basic solution to the pattern I outline here, and from your description, most people who have any public good they want to achieve should read it around the time they think of getting a second person involved.
Rumsfeld is speaking of the Iraq war. It was an optional war, the army turned out to be far understrength for establishing order, and they deliberately threw out the careful plans for preserving e.g. Iraqi museums from looting that had been drawn up by the State Department, due to interdepartmental rivalry.
This doesn’t prove the advice is bad, but at the very least, Rumsfeld was just spouting off Deep Wisdom that he did not benefit from spouting; one would wish to see it spoken by someone who actually benefited from the advice, rather than someone who wilfully and wantonly underprepared for an actual war.
I think the quote is an alternative translation of paragraph 15 in the link above:
“Thus it is that in war the victorious strategist only seeks battle after the victory has been won, whereas he who is destined to defeat first fights and afterwards looks for victory.”
It has an associated commentary:
Ho Shih thus expounds the paradox: “In warfare, first lay plans which will ensure victory, and then lead your army to battle; if you will not begin with stratagem but rely on brute strength alone, victory will no longer be assured.”
I don’t see the circularity. Just because a warrior is victorious doesn’t necessarily mean they won before going to war; it might be instead that victorious warriors go to war first and then seek to win, and defeated warriors do the same thing. Can you spell out the circularity?
Unless you interpret “win first” as “prepare for every eventuality, calculate the unbiased probability of winning and be comfortable with the odds when going to battle”, “win first” can only be meaningfully applied in retrospect.
I think you’ve stumbled upon the correct interpretation.
Sun Tzu was fond of making warfare about strategy and logistics rather than battles, so that one would only fight when victory is a foregone conclusion.
And note that these improvements would not and could not have happened without more funding than the level of previous years
Given the several year lag between funding increases and the listed improvements, it appears that this was less a result of a prepared plan and more a process of underutilized resources attracting a mix of parasites (the theft) and talent (hopefully the more recent staff additions).
Which goes towards a critical question in terms of future funding: is SIAI primarily constrained in its mission by resources or competence?
Of course, the related question is: what is SIAI’s mission? Someone donating primarily for AGI research might not count recent efforts (LW, rationality camps, etc) as improvements.
What should a potential donor expect from money invested into this organization going forward? Internally, what are your metrics for evaluation?
Edited to add: I think that the spin-off of the rationality efforts is a good step towards answering these questions.
I’m pretty sure their combined salaries are lower than the cost of the summer fellows program that SI was sponsoring four or five years ago. Also, if you accept my assertion that Luke could find a way to do it on a limited budget, why couldn’t somebody else?
Givewell is interested in finding charities that translate good intentions into good results. This requires that the employees of the charity have low akrasia, desire to learn about and implement organizational best practices, not suffer from dysrationalia, etc. I imagine that from Givewell’s perspective, it counts as a strike against the charity if some of the charity’s employees have a history of failing at any of these.
I’d rather hear Eliezer say “thanks for funding us until we stumbled across some employees who are good at defeating their akrasia and care about organizational best practices”, because this seems like a better depiction of what actually happened. I don’t get the impression SI was actively looking for folks like Louie and Luke.
Yes to this. Eliezer’s claim about the need for funding may be vulnerable to many of Luke’s criticisms above. But usually the most important thing you need is talent, and that does require funding.
My hope is that the upcoming deluge of publications will answer this objection, but for the moment, I am unclear as to the justification for the level of resources being given to SIAI researchers.
Additionally, I alone have a dozen papers in development, for which I am directing every step of research and writing, and will write the final draft, but am collaborating with remote researchers so as to put in only 5%-20% of the total hours required myself.
This level of freedom is the dream of every researcher on the planet. Yet, it’s unclear why these resources should be devoted to your projects. While I strongly believe that the current academic system is broken, you are asking for a level of support granted to top researchers, prior to having made any original breakthroughs yourself.
If you can convince people to give you that money, wonderful. But until you have made at least some serious advancement to demonstrate your case, donating seems like an act of faith.
It’s impressive that you all have found a way to hack the system and get paid to develop yourselves as researchers outside of the academic system and I will be delighted to see that development bear fruit over the coming years. But, at present, I don’t see evidence that the work being done justifies or requires that support.
This level of freedom is the dream of every researcher on the planet. Yet, it’s unclear why these resources should be devoted to your projects.
Because some people like my earlier papers and think I’m writing papers on the most important topic in the world?
It’s impressive that you all have found a way to hack the system and get paid to develop yourselves as researchers outside of the academic system...
Note that this isn’t uncommon. SI is far from the only think tank with researchers who publish in academic journals. Researchers at private companies do the same.
First, let me say that, after re-reading, I think that my previous post came off as condescending/confrontational which was not my intent. I apologize.
Second, after thinking about this for a few minutes, I realized that some of the reason your papers seem so fluffy to me is that they argue what I consider to be obvious points. In my mind, of course we are likely “to develop human-level AI before 2100.” Because of that, I may have tended to classify your work as outreach more than research.
But outreach is valuable. And, so that we can factor out the question of the independent contribution of your research, having people associated with SIAI with the publications/credibility to be treated as experts has gigantic benefits in terms of media multipliers (being the people who get called on for interviews, panels, etc). So, given that, I can see a strong argument for publication support being valuable to the overall organization goals regardless of any assessment of the value of the research.
Note that this isn’t uncommon. SI is far from the only think tank with researchers who publish in academic journals. Researchers at private companies do the same.
My only point was that, in those situations, usually researchers are brought in with prior recognized achievements (or, unfortunately all too often, simply paper credentials). SIAI is bringing in people who are intelligent but unproven and giving them the resources reserved for top talent in academia or industry. As you’ve pointed out, one of the differences with SIAI is the lack of hoops to jump through.
Edit: I see you commented below that you view your own work as summarization of existing research and we agree on the value of that. Sorry that my slow typing speed left me behind the flow of the thread.
It’s true at my company, at least. There are quite a few papers out there authored by the researchers at the company where I work. There are several good business reasons for a company to invest time into publishing a paper; positive PR is one of them.
Because some people like my earlier papers and think I’m writing papers on the most important topic in the world?
But then you put your intellect at issue, and I think I’m entitled to opine that you lack the qualities of intellect that would make such recommendation credible. You’re a budding scholar; a textbook writer at heart. You lack any of the originality of a thinker.
You confirm the lead poster’s allegations that SIA staff are insular and conceited.
Of course you are. And, you may not be one of the people who “like my earlier papers.”
You confirm the lead poster’s allegations that SIA staff are insular and conceited.
Really? How? I commented earlier on LW (can’t find it now) about how the kind of papers I write barely count as “original research” because for the most part they merely summarize and clarify the ideas of others. But as Beckstead says, there is a strong need for that right now.
For insights in decision theory and FAI theory, I suspect we’ll have to look to somebody besides Luke Muehlhauser. We keep trying to hire such people but they keep saying “No.” (I got two more “no”s just in the last 3 weeks.) Part of that may be due to the past and current state of the organization — and luckily, fixing that kind of thing is something I seem to have some skills with.
Isn’t this very strong evidence in support of Holden’s point about “Apparent poorly grounded belief in SI’s superior general rationality” (excluding Luke, at least)? And especially this?
This topic is something I’ve been thinking about lately. Do SIers tend to have superior general rationality, or do we merely escape a few particular biases? Are we good at rationality, or just good at “far mode” rationality (aka philosophy)? Are we good at epistemic but not instrumental rationality? (Keep in mind, though, that rationality is only a ceteris paribus predictor of success.)
Or, pick a more specific comparison. Do SIers tend to be better at general rationality than someone who can keep a small business running for 5 years? Maybe the tight feedback loops of running a small business are better rationality training than “debiasing interventions” can hope to be.
Of course, different people are more or less rational in different domains, at different times, in different environments.
This isn’t an idle question about labels. My estimate of the scope and level of people’s rationality in part determines how much I update from their stated opinion on something. How much evidence for Hypothesis X (about organizational development) is it when Eliezer gives me his opinion on the matter, as opposed to when Louie gives me his opinion on the matter? When Person B proposes to take on a totally new kind of project, I think their general rationality is a predictor of success — so, what is their level of general rationality?
Are we good at epistemic but not instrumental rationality?
Holden implies (and I agree with him) that there’s very little evidence at the moment to suggest that SI is good at instrumental rationality. As for epistemic rationality, how would we know? Is there some objective way to measure it? I personally happen to believe that if a person seems to take it as a given that he’s great at epistemic rationality, this fact should count as evidence (however circumstantial) against him being great at epistemic rationality… but that’s just me.
If you accept that your estimate of someone’s “rationality” should depend on the domain, the environment, the time, the context, etc… and what you want to do is make reliable estimates of the reliability of their opinion, their chances of success. etc… it seems to follow that you should be looking for comparisons within a relevant domain, environment, etc.
That is, if you want to get opinions about hypothesis X about organizational development that serve as significant evidence, it seems the thing to do is to find someone who knows a lot about organizational development—ideally, someone who has been successful at developing organizations—and consult their opinions. How generally rational they are might be very relevant causally, or it might not, but is in either case screened off by their domain competence… and their domain competence is easier to measure than their general rationality.
So is their general rationality worth devoting resources to determining?
It seems this only makes sense if you have already (e.g.) decided to ask Eliezer and Louie for their advice, whether it’s good evidence or not, and now you need to know how much evidence it is, and you expect the correct answer is different from the answer you’d get by applying the metrics you know about (e.g., domain familiarity and previously demonstrated relevant expertise).
I do spend a fair amount of time talking to domain experts outside of SI. The trouble is that the question of what we should do about thing X doesn’t just depend on domain competence but also on thousands of details about the inner workings of SI and our mission that I cannot communicate to domain experts outside SI, but which Eliezer and Louie already possess.
So it seems you have a problem in two domains (organizational development + SI internals) and different domain experts in both domains (outside domain experts + Eliezer/Louie), and need some way of cross-linking the two groups’ expertise to get a coherent recommendation, and the brute-force solutions (e.g. get them all in a room together, or bring one group up to speed on the other’s domain) are too expensive to be worth it. (Well, assuming the obstacle isn’t that the details need to be kept secret, but simply that expecting an outsider to come up to speed on all of SI’s local potentially relevant trivia simply isn’t practical.)
Yes?
Yeah, that can be a problem.
In that position, for serious questions I would probably ask E/L for their recommendations and a list of the most relevant details that informed that decision, then go to outside experts with a summary of the competing recommendations and an expanded version of that list and ask for their input. If there’s convergence, great. If there’s divergence, iterate.
This is still an expensive approach, though, so I can see where a cheaper approximation for less important questions is worth having.
In the world in which a varied group of intelligent and especially rational people are organizing to literally save humanity, I don’t see the relatively trivial, but important, improvements you’ve made in a short period of time being made because they were made years ago. And I thought that already accounting for the points you’ve made.
I mean, the question this group should be asking themselves is “how can we best alter the future so as to navigate towards FAI?” So, how did they apparently miss something like opportunity cost? Why, for instance, have their salaries increased when that money could have been used to improve the foundation of their cause, from which everything else follows?
(Granted, I don’t know the history and inner workings of the SI, and so I could be missing some very significant and immovable hurdles, but I don’t see that as very likely; at least, not as likely as Holden’s scenario.)
I don’t see the relatively trivial, but important, improvements you’ve made in a short period of time being made because they were made years ago. And I thought that already accounting for the points you’ve made.
I don’t know what these sentences mean.
So, how did they apparently miss something like opportunity cost? Why, for instance, have their salaries increased when that money could have been used to improve the foundation of their cause, from which everything else follows?
Actually, salary increases help with opportunity cost. At very low salaries, SI staff ends up spending lots of time and energy on general life cost-saving measures that distract us from working on x-risk reduction. And our salaries are generally still pretty low. I have less than $6k in my bank accounts. Outsourcing most tasks to remote collaborators also helps a lot with opportunity cost.
People are more rational in different domains, environments, and so on.
The people at SI may have poor instrumental rationality while being adept at epistemic rationality.
Being rational doesn’t necessarily mean being successful.
I accept all those points, and yet, if SI had superior general rationality, I would still expect the improvements you’ve made since being hired to have been made before you were hired. That is, you wouldn’t have that list of relatively trivial things to brag about because someone else would have recognized the items on that list as important and got them done somehow (ignore any negative connotations—they’re not intended).
For instance, I don’t see a varied group of people with superior general rationality not discovering or just not outsourcing work they don’t have a comparative advantage in (i.e., what you’ve done). That doesn’t look like just a failure in instrumental rationality, or just rationality operating on a different kind of utility function, or just a lack of domain specific knowledge.
The excuses available to a person acting in a way that’s non-traditionally rational are less convincing when you apply them to a group.
Actually, salary increases help with opportunity cost. At very low salaries, SI staff ends up spending lots of time and energy on general life cost-saving measures that distract us from working on x-risk reduction. And our salaries are generally still pretty low. I have less than $6k in my bank accounts.
No, I get that. But that still doesn’t explain away the higher salaries like EY’s 80k/year and its past upwards trend. I mean, these higher paid people are the most committed to the cause, right? I don’t see those people taking a higher salary when they could use that money for more outsourcing, or another employee, or better employees, if they want to literally save humanity while being superior in general rationality. It’s like a homeless person desperately in want of shelter trying to save enough for an apartment and yet buying meals at some restaurant.
Outsourcing most tasks to remote collaborators also helps a lot with opportunity cost.
That’s the point I was making: why wasn’t that done earlier? How did these people apparently overlook opportunity costs? (And I’m just using outsourcing as an example because it was one of the most glaring changes you made that I think should probably have been made much earlier.)
Right, I think we’re saying the same thing, here: the availability of so much low-hanging fruit in organizational development as late as Sept. 2011 is some evidence against the general rationality of SIers. Eliezer seems to want to say it was all a matter of funding, but that doesn’t make sense to me.
Now, on this:
I don’t see those people taking a higher salary when they could use that money for more outsourcing, or another employee, or better employees, if they want to literally save humanity while being superior in general rationality.
For some reason I’m having a hard time parsing your sentences for unambiguous meaning, but if I may attempt to rephrase: “SIers wouldn’t take any salaries higher than (say) $70k/yr if they were truly committed to the cause and good in general rationality, because they would instead use that money to accomplish other things.” Is that what you’re saying?
Yes, the Bay Area is expensive. We’ve considered relocating, but on the other hand the (by far) best two places for meeting our needs in HR and in physically meeting with VIPs are SF and NYC, and if anything NYC is more expensive than the Bay Area. We cut living expenses where we can: most of us are just renting individual rooms.
Also, of course, it’s not like the Board could decide we should relocate to a charter city in Honduras and then all our staff would be able to just up and relocate. :)
(Rain may know all this; I’m posting it for others’ benefit.)
I think it’s crucial that SI stay in the Bay Area. Being in a high-status place signals that the cause is important. If you think you’re not taken seriously enough now, imagine if you were in Honduras…
Not to mention that HR is without doubt the single most important asset for SI. (Which is why it would probably be a good idea to pay more than the minimum cost of living.)
FWIW, Wikimedia moved from Florida to San Francisco precisely for the immense value of being at the centre of things instead of the middle of nowhere (and yes, Tampa is the middle of nowhere for these purposes, even though it still has the primary data centre). Even paying local charity scale rather than commercial scale (there’s a sort of cycle where WMF hires brilliant kids, they do a few years working at charity scale then go to Facebook/Google/etc for gobs of cash), being in the centre of things gets them staff and contacts they just couldn’t get if they were still in Tampa. And yes, the question came up there pretty much the same as it’s coming up here: why be there instead of remote? Because so much comes with being where things are actually happening, even if it doesn’t look directly related to your mission (educational charity, AI research institute).
The charity is still registered in Florida but the office is in SF. I can’t find the discussion on a quick search, but all manner of places were under serious consideration—including the UK, which is a horrible choice for legal issues in so very many ways.
In our experience, monkeys don’t work that way. It sounds like it should work, and then it just… doesn’t. Of course we do lots of Skyping, but regular human contact turns out to be pretty important.
(nods) Yeah, that’s been my experience too, though I’ve often suspected that companies like Google probably have a lot of research on the subject lying around that might be informative.
Some friends of mine did some experimenting along these lines when doing distributed software development (in both senses) and were somewhat startled to realize that Dark Age of Camelot worked better for them as a professional conferencing tool than any of the professional conferencing tools their company had. They didn’t mention this to their management.
and were somewhat startled to realize that Dark Age of Camelot worked better for them as a professional conferencing tool than any of the professional conferencing tools their company had. They didn’t mention this to their management.
I am reminded that Flickr started as a photo add-on for an MMORPG...
Enough for you to agree with Holden on that point?
“SIers wouldn’t take any salaries higher than (say) $70k/yr if they were truly committed to the cause and good in general rationality, because they would instead use that money to accomplish other things.” Is that what you’re saying?
Yes, but I wouldn’t set a limit at a specific salary range; I’d expect them to give as much as they optimally could, because I assume they’re more concerned with the cause than the money. (re the 70k/yr mention: I’d be surprised if that was anywhere near optimal)
Enough for you to agree with Holden on that point?
Probably not. He and I continue to dialogue in private about the point, in part to find the source of our disagreement.
Yes, but I wouldn’t set a limit at a specific salary range; I’d expect them to give as much as they optimally could, because I assume they’re more concerned with the cause than the money. (re the 70k/yr mention: I’d be surprised if that was anywhere near optimal)
I believe everyone except Eliezer currently makes between $42k/yr and $48k/yr — pretty low for the cost of living in the Bay Area.
Probably not. He and I continue to dialogue in private about the point, in part to find the source of our disagreement.
So, if you disagree with Holden, I assume you think SIers have superior general rationality: why?
And I’m confident SIers will score well on rationality tests, but that looks like specialized rationality. I.e., you can avoid a bias, but you can’t avoid a failure to achieve your goals. To me, the SI approach seems poorly leveraged. I expect more significant returns from simple knowledge acquisition. E.g., you want to become successful? YOU WANT TO WIN?! Great, read these textbooks on microeconomics, finance, and business. I think this is more the approach you take anyway.
I believe everyone except Eliezer currently makes between $42k/yr and $48k/yr — pretty low for the cost of living in the Bay Area.
That isn’t as bad as I thought it was; I don’t know if that’s optimal, but it seems at least reasonable.
I assume you think SIers have superior general rationality: why?
I’ll avoid double-labor on this and wait to reply until my conversation with Holden is done.
I expect more significant returns from simple knowledge acquisition. E.g., you want to become successful? …Great, read these textbooks on microeconomics, finance, and business. I think this is more the approach you take anyway.
Right. Exercise the neglected virtue of scholarship and all that.
It’s not that easy to dismiss; if it’s as poorly leveraged as it looks relative to other approaches, then you have little reason to be spreading and teaching SI’s brand of specialized rationality (except perhaps for income).
Weird, I have this perception of SI being heavily invested in overcoming biases and epistemic rationality training to the detriment of relevant domain specific knowledge, but I guess that’s wrong?
I’m not dismissing it, I’m endorsing it and agreeing with you that it has been my approach ever since my first post on LW.
I wasn’t talking about you; I was talking about SI’s approach in spreading and training rationality. You (SI) have Yudkowsky writing books, you have rationality minicamps, you have lesswrong, you and others are writing rationality articles and researching the rationality literature, and so on.
That kind of rationality training, research, and message looks poorly leveraged in achieving your goals, is what I’m saying. Poorly leveraged for anyone trying to achieve goals. And at its most abstract, that’s what rationality is, right? Achieving your goals.
So, I don’t care if your approach was to acquire as much relevant knowledge as possible before dabbling in debiasing, Bayes, and whatnot (i.e., prioritizing the most leveraged approach). I’m wondering why your approach doesn’t seem to be SI’s approach. I’m wondering why SI doesn’t prioritize rationality training, research, and message by whatever is the most leveraged in achieving SI’s goals. I’m wondering why SI doesn’t spread the virtue of scholarship to the detriment of training debiasing and so on.
SI wants to raise the sanity waterline; is what SI is doing even near optimal for that? What SIers knew and trained for couldn’t even get them to notice, for years, the opportunity costs they were incurring; that is sad.
(Disclaimer: the following comment should not be taken to imply that I myself have concluded that SI staff salaries should be reduced.)
I believe everyone except Eliezer currently makes between $42k/yr and $48k/yr — pretty low for the cost of living in the Bay Area.
I’ll grant you that it’s pretty low relative to other Bay Area salaries. But as for the actual cost of living, I’m less sure.
I’m not fortunate enough to be a Bay Area resident myself, but here is what the internet tells me:
After taxes, a $48,000/yr gross salary in California equates to a net of around $3000/month.
A 1-bedroom apartment in Berkeley and nearby places can be rented for around $1500/month. (Presumably, this is the category of expense where most of the geography-dependent high cost of living is contained.)
If one assumes an average spending of $20/day on food (typically enough to have at least one of one’s daily meals at a restaurant), that comes out to about $600/month.
That leaves around $900/month for miscellaneous expenses, which seems pretty comfortable for a young person with no dependents.
So, if these numbers are right, it seems that this salary range is actually right about what the cost of living is. Of course, this calculation specifically does not include costs relating to signaling (via things such as choices of housing, clothing, transportation, etc.) that one has more money than necessary to live (and therefore isn’t low-status). Depending on the nature of their job, certain SI employees may need, or at least find it distinctly advantageous for their particular duties, to engage in such signaling.
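To make the arithmetic above explicit, here is a minimal sketch using only the figures just quoted; the $3,000/month net figure is the rough estimate given above, not an exact tax calculation.

```python
# A rough monthly-budget check using the figures quoted above.
# Assumptions (taken from the comment, not independently verified):
#   - ~$3,000/month net from a $48,000/yr gross salary in California
#   - ~$1,500/month rent for a 1-bedroom near Berkeley
#   - ~$20/day on food

net_monthly_income = 3000
rent = 1500
food = 20 * 30          # about $600/month

miscellaneous = net_monthly_income - rent - food
print(miscellaneous)    # -> 900, i.e. roughly $900/month left over
```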
The point is that we’re consequentialists, and lowering salaries even further would save money (on salaries) but result in SI getting less done, not more — for the same reason that outsourcing fewer tasks would save money (on outsourcing) but cause us to get less done, not more.
You say this as though it’s obvious, but if I’m not mistaken, salaries used to be about 40% of what they are now, and while the higher salaries sound like they are making a major productivity difference, hiring 2.5 times as many people would also make a major productivity difference. (Though yes, obviously marginal hires would be lower in quality.)
I don’t think salaries were ever as low as 40% of what they are now. When I came on board, most people were at $36k/yr.
To illustrate why lower salaries means less stuff gets done: I’ve been averaging 60 hours per week, and I’m unusually productive. If I am paid less, that means that (to pick just one example from this week) I can’t afford to take a taxi to and from the eye doctor, which means I spend 1.5 hrs each way changing buses to get there, and spend less time being productive on x-risk. That is totally not worth it. Future civilizations would look back on this decision as profoundly stupid.
Pretty sure Anna and Steve Rayhawk had salaries around $20k/yr at some point while living in Silicon Valley.
I don’t think that you’re really responding to Steven’s point. Yes, as Steven said, if you were paid less then clearly that would impose more costs on you, so ceteris paribus your getting paid less would be bad. But, as Steven said, the opportunity cost is potentially very high. You haven’t made a rationally compelling case that the missed opportunity is “totally not worth it” or that heeding it would be “profoundly stupid”, you’ve mostly just re-asserted your conclusion, contra Steven’s objection. What are your arguments that this is the case? Note that I personally think it’s highly plausible that $40-50k/yr is optimal, but as far as I can see you haven’t yet listed any rationally compelling reasons to think so.
(This comment is a little bit sterner than it would have been if you hadn’t emphatically asserted that conclusions other than your own would be “profoundly stupid” without first giving overwhelming justification for your conclusion. It is especially important to be careful about such apparent overconfidence on issues where one clearly has a personal stake in the matter.)
I will largely endorse Will’s comment, then bow out of the discussion, because this appears to be too personal and touchy a topic for a detailed discussion to be fruitful.
Pretty sure Anna and Steve Rayhawk had salaries around $20k/yr at some point while living in Silicon Valley.
If so, I suspect they were burning through savings during this time or had some kind of cheap living arrangement that I don’t have.
What are your arguments that [paying you less wouldn’t be worth it]?
I couldn’t really get by on less, so paying me less would cause me to quit the organization and do something else instead, which would cause much of this good stuff to probably not happen.
It’s VERY hard for SingInst to purchase value as efficiently as by purchasing Luke-hours. At $48k/yr for 60 hrs/wk, I make $15.38/hr, and one Luke-hour is unusually productive for SingInst. Paying me less and thereby causing me to work fewer hours per week is a bad value proposition for SingInst.
paying me less would require me to do things that take up time and energy in order to get by with a smaller income. Then, assuming all goes well, future intergalactic civilizations would look back and think this was incredibly stupid; in much the same way that letting billions of person-containing brains rot in graves, and humanity allocating less than a million dollars per year to the Singularity Institute, would predictably look pretty stupid in retrospect. At Singularity Institute board meetings we at least try not to do things which will predictably make future intergalactic civilizations think we were being willfully stupid. That’s all there is to it, and no more.
This seems to me unnecessarily defensive. I support the goals of SingInst, but I could never bring myself to accept the kind of salary cut you guys are taking in order to work there. Like every other human on the planet, I can’t be accurately modelled with a utility function that places any value on far distant strangers; you can more accurately model what stranger-altruism I do show as purchase of moral satisfaction, though I do seek for such altruism to be efficient. SingInst should pay the salaries it needs to pay to recruit the kind of staff it needs to fulfil its mission; it’s harder to recruit if staff are expected to be defensive about demanding market salaries for their expertise, with no more than a normal adjustment for altruistic work much as if they were working for an animal sanctuary.
I could never bring myself to accept the… salary cut you guys are taking in order to work [at SI]… SingInst should pay the salaries it needs to pay to recruit the kind of staff it needs to fulfill its mission; it’s harder to recruit if staff are expected to be defensive about demanding market salaries for their expertise...
So when I say “unnecessarily defensive”, I mean that all the stuff about the cost of taxis is after-the-fact defensive rationalization; it can’t be said about a single dollar you spend on having a life outside of SI. The truth is that even the best human rationalist in the world isn’t going to agree to giving those up, and since you have to recruit humans, you’d best pay the sort of salary that is going to attract and retain them. That of course includes yourself.
The same goes for saying “move to Honduras”. Your perfectly utility-maximising AGIs will move to Honduras, but your human staff won’t; they want to live in places like the Bay Area.
As katydee and thomblake say, I mean that working for SingInst would mean a bigger reduction in my salary than I could currently bring myself to accept. If I really valued the lives of strangers as a utilitarian, the benefits to them of taking a salary cut would be so huge that it would totally outweigh the costs to me. But it looks like I only really place direct value on the short-term interests of myself and those close to me, and everything else is purchase of moral satisfaction. Happily, purchase of moral satisfaction can still save the world if it is done efficiently.
Since the labour pool contains only human beings, with no true altruistic utility maximizers, SingInst should hire and pay accordingly; the market shows that people will accept a lower salary for a job that directly does good, but not a vastly lower salary. It would increase SI-utility if Luke accepted a lower salary, but it wouldn’t increase Luke-utility, and driving Luke away would cost a lot of SI-utility, so calling for it is in the end a cheap shot and a bad recommendation.
I live in London, which is also freaking expensive—but so are all the places I want to live. There’s a reason people are prepared to pay more to live in these places.
Indeed. I guess “taking a cut” can sometimes mean “taking some of the money”, so you could interpret this as meaning “I couldn’t accept all that money”, which as you say is the opposite of what I meant!
I think the standard answer is that the networking and tech industry connections available in the Bay Area are useful enough to SIAI to justify the high costs of operating there.
I understand the point you’re making regarding salaries, and for once I agree.
However, it’s rather presumptuous of you (and/or Eliezer) to assume, implicitly, that our choices are limited to only two possibilities: “Support SIAI, save the world”, and “Don’t support SIAI, the world is doomed”. I can envision many other scenarios, such as “Support SIAI, but their fears were overblown and you implicitly killed N children by not spending the money on them instead”, or “Don’t support SIAI, support some other organization instead because they’ll have a better chance of success”, etc.
...I can’t afford to take a taxi to and from the eye doctor, which means I spend 1.5 hrs each way changing buses to get there, and spend less time being productive on x-risk. That is totally not worth it. Future civilizations would look back on this decision as profoundly stupid.
You also quoted Eliezer saying something similar.
This outlook implies strongly that whatever SIAI is doing is of such monumental significance that future civilizations will not only remember its name, but also reverently preserve every decision it made. You are also quite fond of saying that the work that SIAI is doing is tantamount to “saving the world”; and IIRC Eliezer once said that, if you have a talent for investment banking, you should make as much money as possible and then donate it all to SIAI, as opposed to any other charity.
This kind of grand rhetoric presupposes not only that the SIAI is correct in its risk assessment regarding AGI, but also that they are uniquely qualified to address this potentially world-ending problem, and that, over the ages, no one more qualified could possibly come along. All of this could be true, but it’s far from a certainty, as your writing would seem to imply.
You appear to be very confident that future civilizations will remember SIAI in a positive way, and care about its actions. If so, they must have some reason for doing so. Any reason would do, but the most likely reason is that SIAI will accomplish something so spectacularly beneficial that it will affect everyone in the far future. SIAI’s core mission is to save the world from UFAI, so it’s reasonable to assume that this is the highly beneficial effect that the SIAI will achieve.
I don’t have a problem with this chain of events, just with your apparent confidence that (a) it’s going to happen in exactly that way, and (b) your organization is the only one qualified to save the world in this specific fashion.
(EDIT: I forgot to say that, if we follow your reasoning to its conclusion, then you are indeed implying that donating as much money or labor as possible to SIAI is the only smart move for any rational agent.)
Note that I have no problem with your main statement, i.e. “lowering the salaries of SIAI members would bring us too much negative utility to compensate for the monetary savings”. This kind of cost-benefit analysis is done all the time, and future civilizations rarely enter into it.
Please substitute “certainty minus epsilon” for “certainty” wherever you see it in my post. It was not my intention to imply 100% certainty; just a confidence value so high that it amounts to the same thing for all practical purposes.
I don’t think “certainty minus epsilon” improves much. It moves it from theoretical impossibility to practical—but looking that far out, I expect “likelihood” might be best.
And where do SI claim even that? Obviously some of their discussions are implicitly conditioned on the fundamental assumptions behind their mission being true, but that doesn’t mean that they have extremely high confidence in those assumptions.
This outlook implies strongly that whatever SIAI is doing is of such monumental significance that future civilizations will not only remember its name, but also reverently preserve every decision it made.
In the SIAI/transhumanist outlook, if civilization survives, a large fraction (perhaps the majority) of extant human minds will survive as uploads. As a result, all of their memories will likely be stored, dissected, shared, searched, judged, and so on. Much will be preserved in such a future. And even without uploading, there are plenty of people who have maintained websites since the early days of the internet with no loss of information, and this is quite likely to remain true far into the future if civilization survives.
Plenty of people make less than you and work harder than you. Look in every major city and you will find plenty of people that fit this category, both in business and labor.
“That is totally not worth it. Future civilizations would look back on this decision as profoundly stupid.”
Elitism plus demanding that you don’t have to budget. Seems that you need to work more and focus less on how “awesome” you are.
You make good contributions...but let’s not get carried away.
If you really cared about future risk you would be working away at the problem even with a smaller salary. Focus on your work.
What we really need is some kind of emotionless robot who doesn’t care about its own standard of living and who can do lots of research and run organizations and suchlike without all the pesky problems introduced by “being human”.
That’s not actually that good, I don’t think—I go to a good college, and I know many people who are graduating to 60k-80k+ jobs with recruitment bonuses, opportunities for swift advancement, etc. Some of the best people I know could literally drop out now (three or four weeks prior to graduation) and immediately begin making six figures.
SIAI wages certainly seem fairly low to me relative to the quality of the people they are seeking to attract, though I think there are other benefits to working for them that cause the organization to attract skillful people regardless.
Ouch. I’d like to think that the side benefits for working for SIAI outweigh the side benefits for working for whatever soulless corporation Dilbert’s workplace embodies, though there is certainly a difference between side benefits and actual monetary compensation.
I graduated ~5 years ago with an engineering degree from a first-tier university, and I would have considered those starting salaries to be low to decent, not high. This is especially true in places with a high cost of living like the Bay Area.
Having a good internship during college often meant starting out at 60k/yr if not higher.
If this is significantly different for engineers exiting first-tier universities now, it would be interesting to know.
To summarize and rephrase: in a “counterfactual” world where SI was actually rational, they would have found all these solutions and done all these things long ago.
Many of your sentences are confusing because you repeatedly use the locution “I see X” / “I don’t see X” in a nonstandard way, apparently to mean “X would have happened” / “X would not have happened”.
This is not the way that phrase is usually understood. Normally, “I see X” is taken to mean either “I observe X” or “I predict X”. For example I might say (if I were so inclined):
Unlike you, I see a lot of rationality being demonstrated by SI employees.
meaning that I believe (from my observation) they are in fact being rational. Or, I might say:
I don’t see Luke quitting his job at SI tomorrow to become a punk rocker.
meaning that I don’t predict that will happen. But I would not generally say:
* I don’t see these people taking a higher salary.
if what I mean is “these people should/would not have taken a higher salary [if such-and-such were true]”.
Oh, I see ;) Thanks. I’ll definitely act on your comment, but I was using “I see X” as “I predict X”—just in the context of a possible world. E.g., I predict that, in the possible world in which SIers are superior in general rationality and committed to their cause, Luke wouldn’t have that list of accomplishments. Or, “yet I still see the Singularity Institute having made the improvements...”
I now see that I’ve been using ‘see’ as syntactic sugar for counterfactual talk… but no more!
I was using “I see X” as “I predict X”—just in the context of a possible world.
To get away with this, you really need, at minimum, an explicit counterfactual clause (“if”, “unless”, etc.) to introduce it: “In a world where SIers are superior in general rationality, I don’t see Luke having that list of accomplishments.”
The problem was not so much that your usage itself was logically inconceivable, but rather that it collided with the other interpretations of “I see X” in the particular contexts in which it occurred. E.g. “I don’t see them taking higher salaries” sounded like you were saying that they weren’t taking higher salaries. (There was an “if” clause, but it came way too late!)
That might be informative if we knew anything about your budget, but without any sort of context it sounds purely obfuscatory. (Also, your bank account is pretty close to my annual salary, so you might want to consider what you’re actually signalling here and to whom.)
Apparent poorly grounded belief in SI’s superior general rationality
I found this complaint insufficiently detailed and not well worded.
Average people think their rationality is moderately good. Average people are not very rational. SI affiliated people think they are adept or at least adequate at rationality. SI affiliated people are not complete disasters at rationality.
SI affiliated people are vastly superior to others in general rationality. So the original complaint, literally interpreted, is false.
An interesting question might be on the level of: “Do SI affiliates have rationality superior to what the average person falsely believes his or her rationality is?”
Holden’s complaints each have their apparent legitimacy change differently under his and my beliefs. Some have to do with overconfidence or incorrect self-assessment, others with other-assessment, others with comparing SI people to others. Some of them:
Insufficient self-skepticism given how strong its claims are
Largely agree, as this relates to overconfidence.
...and how little support its claims have won.
Moderately disagree, as this relies on the rationality of others.
Being too selective (in terms of looking for people who share its preconceptions) when determining whom to hire and whose feedback to take seriously.
Largely disagree, as this relies significantly on the competence of others.
Paying insufficient attention to the limitations of the confidence one can have in one’s untested theories, in line with my Objection 1.
Largely agree, as this depends more on accurate assessment of one’s own rationality.
Rather than endorsing “Others have not accepted our arguments, so we will sharpen and/or reexamine our arguments,” SI seems often to endorse something more like “Others have not accepted their arguments because they have inferior general rationality,” a stance less likely to lead to improvement on SI’s part.
There is instrumental value in falsely believing others to have a good basis for disagreement so one’s search for reasons one might be wrong is enhanced. This is aside from the actual reasons of others.
It is easy to imagine an expert in a relevant field objecting to SI based on something SI does or says seeming wrong, only to have the expert couch the objection in literally false terms, perhaps ones that flow from motivated cognition and bear no trace of the real, relevant reason for the objection. This could be followed by SI evaluating and dismissing the stated objection, and then by a failure of a type the expert never actually predicted. All such nuances are lost in the literally false “Apparent poorly grounded belief in SI’s superior general rationality.”
Such a failure comes to mind and is easy for me to imagine as I think this is a major reason why “Lack of impressive endorsements” is a problem. The reasons provided by experts for disagreeing with SI on particular issues are often terrible, but such expressions are merely what they believe their objections to be, and their expertise is in math or some such, not in knowing why they think what they think.
As a supporter and donor to SI since 2006, I can say that I had a lot of specific criticisms of the way that the organization was managed. The points Luke lists above were among them. I was surprised that on many occasions management did not realize the obvious problems and fix them.
But the current management is now recognizing many of these points and resolving them one by one, as Luke says. If this continues, SI’s future looks good.
(Why was this downvoted? If it’s because the downvoter wants to see fewer brain farts, they’re doing it wrong, because the message such a downvote actually conveys is that they want to see fewer acknowledgements of brain farts. Upvoted back to 0, anyway.)
The things posted here are not impressive enough to make me more likely to donate to SIAI and I doubt they appear so for others on this site, especially the many lurkers/infrequent posters here.
Update: My full response to Holden is now here.
As Holden said, I generally think that Holden’s objections for SI “are either correct (especially re: past organizational competence) or incorrect but not addressed by SI in clear argumentative writing (this includes the part on ‘tool’ AI),” and we are working hard to fix both categories of issues.
In this comment I would merely like to argue for one small point: that the Singularity Institute is undergoing comprehensive changes — changes which I believe to be improvements that will help us to achieve our mission more efficiently and effectively.
Holden wrote:
Louie Helm was hired as Director of Development in September 2011. I was hired as a Research Fellow that same month, and made Executive Director in November 2011. Below are some changes made since September. (Pardon the messy presentation: LW cannot correctly render tables in comments.)
SI before Sep. 2011: Very few peer-reviewed research publications.
SI today: More peer-reviewed publications coming in 2012 than in all past years combined. Additionally, I alone have a dozen papers in development, for which I am directing every step of research and writing, and will write the final draft, but am collaborating with remote researchers so as to put in only 5%-20% of the total hours required myself.
SI before Sep. 2011: No donor database / a very broken one.
SI today: A comprehensive donor database.
SI before Sep. 2011: Nearly all work performed directly by SI staff.
SI today: Most work outsourced to remote collaborators so that SI staff can focus on the things that only they can do.
SI before Sep. 2011: No strategic plan.
SI today: A strategic plan developed with input from all SI staff, and approved by the Board.
SI before Sep. 2011: Very little communication about what SI is doing.
SI today: Monthly progress reports, plus three Q&As with Luke about SI research and organizational development.
SI before Sep. 2011: No list of the research problems SI is working on.
SI today: A long, fully-referenced list of research problems SI is working on.
SI before Sep. 2011: Very little direct management of staff and projects.
SI today: Luke monitors all projects and staff work, and meets regularly with each staff member.
SI before Sep. 2011: Almost no detailed tracking of the expense of major SI projects (e.g. Summit, papers, etc.). The sole exception seems to be that Amy was tracking the costs of the 2011 Summit in NYC.
SI today: Detailed tracking of the expense of major SI projects for which this is possible (Luke has a folder in Google docs for these spreadsheets, and the summary spreadsheet is shared with the Board).
SI before Sep. 2011: No staff worklogs.
SI today: All staff members share their worklogs with Luke, Luke shares his worklog with all staff plus the Board.
SI before Sep. 2011: Best practices not followed for bookkeeping/accounting; accountant’s recommendations ignored.
SI today: Meetings with consultants about bookkeeping/accounting; currently working with our accountant to implement best practices and find a good bookkeeper.
SI before Sep. 2011: Staff largely separated, many of them not well-connected to the others.
SI today: After a dozen or so staff dinners, staff much better connected, more of a team.
SI before Sep. 2011: Want to see the basics of AI Risk explained in plain language? Read The Sequences (more than a million words) or this academic book chapter by Yudkowsky.
SI today: Want to see the basics of AI Risk explained in plain language? Read Facing the Singularity (now in several languages, with more being added) or listen to the podcast version.
SI before Sep. 2011: Very few resources created to support others’ research in AI risk.
SI today: IntelligenceExplosion.com, Friendly-AI.com, list of open problems in the field, with references, AI Risk Bibliography 2012, annotated list of journals that may publish papers on AI risk, a partial history of AI risk research, and a list of forthcoming and desired articles on AI risk.
SI before Sep. 2011: A hard-to-navigate website with much outdated content.
SI today: An entirely new website that is easier to navigate and has much new content (nearly complete; should launch in May or June).
SI before Sep. 2011: So little monitoring of funds that $118k was stolen in 2010 before SI noticed. (Note that we have won stipulated judgments to get much of this back, and have upcoming court dates to argue for stipulated judgments to get the rest back.)
SI today: Our bank accounts have been consolidated, with 3-4 people regularly checking over them.
SI before Sep. 2011: SI publications exported straight to PDF from Word or Google Docs, sometimes without even author names appearing.
SI today: All publications being converted into slick, useable LaTeX template (example), with all references checked and put into a central BibTeX file.
SI before Sep. 2011: No write-up of our major public technical breakthrough (TDT) using the mainstream format and vocabulary comprehensible to most researchers in the field (this is what we have at the moment).
SI today: Philosopher Rachael Briggs, whose papers on decision theory have been twice selected for the Philosopher’s Annual, has been contracted to write an explanation of TDT and publish it in one of a select few leading philosophy journals.
SI before Sep. 2011: No explicit effort made toward efficient use of SEO or our (free) Google Adwords.
SI today: Highly optimized use of Google Adwords to direct traffic to our sites; currently working with SEO consultants to improve our SEO (of course, the new website will help).
(Just to be clear, I think this list shows not that “SI is looking really great!” but instead that “SI is rapidly improving and finally reaching a ‘basic’ level of organizational function.”)
...which is not to say, of course, that things were not improving before September 2011. It’s just that the improvements have accelerated quite a bit since then.
For example, Amy was hired in December 2009 and is largely responsible for these improvements:
Built a “real” Board and officers; launched monthly Board meetings in February 2010.
Began compiling monthly financial reports in December 2010.
Began tracking Summit expenses and seeking Summit sponsors.
Played a major role in canceling many programs and expenses that were deemed low ROI.
In addition to reviews, should SI implement a two-man rule for manipulating large quantities of money? (For example, over 5k, over 10k, etc.)
And note that these improvements would not and could not have happened without more funding than the level of previous years—if, say, everyone had been waiting to see these kinds of improvements before funding.
Really? That’s not obvious to me. Of course you’ve been around for all this and I haven’t, but here’s what I’m seeing from my vantage point...
Recent changes that cost very little:
Donor database
Strategic plan
Monthly progress reports
A list of research problems SI is working on (it took me 16 hours to write)
IntelligenceExplosion.com, Friendly-AI.com, AI Risk Bibliography 2012, annotated list of journals that may publish papers on AI risk, a partial history of AI risk research, and a list of forthcoming and desired articles on AI risk (each of these took me only 10-25 hours to create)
Detailed tracking of the expenses for major SI projects
Staff worklogs
Staff dinners (or something that brought staff together)
A few people keeping their eyes on SI’s funds so theft would be caught sooner
Optimization of Google Adwords
Stuff that costs less than some other things SI had spent money on, such as funding Ben Goertzel’s AGI research or renting downtown Berkeley apartments for the later visiting fellows:
Research papers
Management of staff and projects
Rachael Briggs’ TDT write-up
Best-practices bookkeeping/accounting
New website
LaTeX template for SI publications; references checked and then organized with BibTeX
SEO
Do you disagree with these estimates, or have I misunderstood what you’re claiming?
A lot of charities go through this pattern before they finally work out how to transition from a board-run/individual-run tax-deductible band of conspirators to being a professional staff-run organisation tuned to doing the particular thing they do. The changes required seem simple and obvious in hindsight, but it’s a common pattern for it to take years, so SIAI has been quite normal, or at the very least not been unusually dumb.
(My evidence is seeing this pattern close-up in the Wikimedia Foundation, Wikimedia UK (the first attempt at which died before managing it, the second making it through barely) and the West Australian Music Industry Association, and anecdotal evidence from others. Everyone involved always feels stupid at having taken years to achieve the retrospectively obvious. I would be surprised if this aspect of the dynamics of nonprofits had not been studied.)
edit: Luke’s recommendation of The Nonprofit Kit For Dummies looks like precisely the book all the examples I know of needed to have someone throw at them before they even thought of forming an organisation to do whatever it is they wanted to achieve.
Things that cost money:
Amy Willey
Luke Muehlhauser
Louie Helm
CfAR
trying things until something worked
I don’t think this response supports your claim that these improvements “would not and could not have happened without more funding than the level of previous years.”
I know your comment is very brief because you’re busy at minicamp, but I’ll reply to what you wrote, anyway: Someone of decent rationality doesn’t just “try things until something works.” Moreover, many of the things on the list of recent improvements don’t require an Amy, a Luke, or a Louie.
I don’t even have past management experience. As you may recall, I had significant ambiguity aversion about the prospect of being made Executive Director, but as it turned out, the solution to almost every problem X has been (1) read what the experts say about how to solve X, (2) consult with people who care about your mission and have solved X before, and (3) do what they say.
When I was made Executive Director and phoned our Advisors, most of them said “Oh, how nice to hear from you! Nobody from SingInst has ever asked me for advice before!”
That is the kind of thing that makes me want to say that SingInst has “tested every method except the method of trying.”
Donor database, strategic plan, staff worklogs, bringing staff together, expenses tracking, funds monitoring, basic management, best-practices accounting/bookkeeping… these are all literally from the Nonprofits for Dummies book.
Maybe these things weren’t done for 11 years because SI’s decision-makers did make good plans but failed to execute them due to the usual defeaters. But that’s not the history I’ve heard, except that some funds monitoring was insisted upon after the large theft, and a donor database was sorta-kinda-not-really attempted at one point. The history I’ve heard is that SI failed to make these kinds of plans in the first place, failed to ask advisors for advice, failed to read Nonprofits for Dummies, and so on.
Money wasn’t the barrier to doing many of those things, it was a gap in general rationality.
I will agree, however, that what is needed now is more money. We are rapidly becoming a more robust and efficient and rational organization, stepping up our FAI team recruiting efforts, stepping up our transparency and accountability efforts, and stepping up our research efforts, and all those things cost money.
At the risk of being too harsh… When I began to intern with the Singularity Institute in April 2011, I felt uncomfortable suggesting that people donate to SingInst, because I could see it from the inside and it wasn’t pretty. (And I’m not the only SIer who felt this way at the time.)
But now I do feel comfortable asking people to donate to SingInst. I’m excited about our trajectory and our team, and if we can raise enough support then we might just have a shot at winning after all.
Luke has just told me (personal conversation) that what he got from my comment was, “SIAI’s difficulties were just due to lack of funding” which was not what I was trying to say at all. What I was trying to convey was more like, “I didn’t have the ability to run this organization, and knew this—people who I hoped would be able to run the organization, while I tried to produce in other areas (e.g. turning my back on everything else to get a year of FAI work done with Marcello or writing the Sequences) didn’t succeed in doing so either—and the only reason we could hang on long enough to hire Luke was that the funding was available nonetheless and in sufficient quantity that we could afford to take risks like paying Luke to stay on for a while, well before we knew he would become Executive Director”.
Does Luke disagree with this clarified point? I do not find a clear indicator in this conversation.
Update: I came out of a recent conversation with Eliezer with a higher opinion of Eliezer’s general rationality, because several things that had previously looked to me like unforced, foreseeable mistakes by Eliezer now look to me more like non-mistakes or not-so-foreseeable mistakes.
You’re allowed to say these things on the public Internet?
I just fell in love with SI.
Well, at our most recent board meeting I wasn’t fired, reprimanded, or even questioned for making these comments, so I guess I am. :)
Not even funny looks? ;)
It’s Luke you should have fallen in love with, since he is the one turning things around.
On the other hand I can count with one hand the number of established organisations I know of that would be sociologically capable of ceding power, status and control to Luke the way SingInst did. They took an untrained intern with essentially zero external status from past achievements and affiliations and basically decided to let him run the show (at least in terms of publicly visible initiatives). It is clearly the right thing for SingInst to do and admittedly Luke is very tall and has good hair which generally gives a boost when it comes to such selections—but still, making the appointment goes fundamentally against normal human behavior.
(Where I say “count with one hand” I am not including the use of any digits thereupon. I mean one.)
It doesn’t matter that I completely understand why this phrase was included, I still found it hilarious in a network sitcom sort of way.
Consider the implications in light of HoldenKarnofsky’s critique of SI’s pretensions to high rationality.
Rationality is winning.
SI, at the same time as it was claiming extraordinary rationality, was behaving in ways that were blatantly irrational.
Although this is supposedly due to “the usual causes,” rationality (winning) subsumes overcoming akrasia.
HoldenKarnofsky is correct that SI made claims for its own extraordinary rationality at a time when its leaders weren’t rational.
Further: why should anyone give SI credibility today—when it stands convicted of self-serving misrepresentation in the recent past?
As a minor note, observe that claims of extraordinary rationality do not necessarily contradict claims of irrationality. The sanity waterline is very low.
Do you mean to imply in context here that the organizational management of SIAI at the time under discussion was above average for a nonprofit organization? Or are you just making a more general statement that a system can be irrational while demonstrating above average rationality? I certainly agree with the latter.
Are you comparing it to the average among nonprofits started, or nonprofits extant? I would guess that it was well below average for extant nonprofits, but about or slightly above average for started nonprofits. I’d guess that most nonprofits are started by people who don’t know what they’re doing and don’t know what they don’t know, and that SI probably did slightly better because the people who were being a bit stupid were at least very smart, which can help. However, I’d guess that most such nonprofits don’t live long because they don’t find a Peter Thiel to keep them alive.
Your assessment looks about right to me. I have considerable experience of averagely-incompetent nonprofits, and SIAI looks normal to me. I am strongly tempted to grab that “For Dummies” book and, if it’s good, start sending copies to people …
In the context of thomblake’s comment, I suppose nonprofits started is the proper reference class.
I don’t see the point of comparing to average nonprofits. Average for-profits don’t realize any profit, and average non-profits just waste money.
I would say SIAI best parallels the average newly started “research” organization that is developing some free-energy device or other, run by non-scientists, with some hired scientists as chaff.
Sadly, I agree. Unless you look at it very closely, SIAI pattern-matches to “crackpots trying to raise money to fund their crackpottiness” fairly well. (What saves them is that their ideas are a lot better than the average crackpot.)
Yes, this.
On an arbitrary scale I just made up, below 100 degrees of rationality is “irrational”, and 0 degrees of rationality is “ordinary”. 50 is extraordinarily rational and yet irrational.
50 while you’re thinking you’re at 100 is being an extraordinary loser (overconfidence leads to big failures)
In any case, this is just wordplay. Holden has seen many organizations that are/were more rational; that’s probably what he means by lack of extraordinary rationality.
You’ve misread the post—Luke is saying that he doesn’t think the “usual defeaters” are the most likely explanation.
Correct.
Just to let you know, you’ve just made it on my list of the very few LW regulars I no longer bother replying to, due to the proven futility of any communications. In your case it is because you have a very evident ax to grind, which is incompatible with rational thought.
This comment seems strange. Is having an ax to grind opposed to rationality? Then why does Eliezer Yudkowsky, for example, not hesitate to advocate for causes such as friendly AI? Doesn’t he have an ax to grind? More of one really, since this ax chops trees of gold.
It would seem intellectual honesty would require that you say you reject discussions with people with an ax to grind, unless you grind a similar ax.
From http://www.usingenglish.com: “If you have an axe to grind with someone or about something, you have a grievance, a resentment and you want to get revenge or sort it out.” One can hardly call the unacknowledged emotions of resentment and the need for revenge/retribution compatible with rationality. srdiamond piled up a bunch of negative statements about SI (partially correct, but irrelevant in the context of my comment), making these emotions quite clear.
That’s a restrictive definition of “ax to grind,” by the way—it’s normally used to mean any special interest in the subject: “an ulterior often selfish underlying purpose” (Merriam-Webster’s Collegiate Dictionary).
But I might as well accept your meaning for discussion purposes. If you detect unacknowledged resentment in srdiamond, don’t you detect unacknowledged ambition in Eliezer Yudkowsky?
There’s actually good reason for the broader meaning of “ax to grind.” Any special stake is a bias. I don’t think you can say that someone who you think acts out of resentment, like srdiamond, is more intractably biased than someone who acts out of other forms of narrow self-interest, which almost invariably applies when someone defends something he gets money from.
I don’t think it’s a rational method to treat people differently, as inherently less rational, when they seem resentful. It is only one of many difficult biases. Financial interest is probably more biasing. If you think the arguments are crummy, that’s something else. But the motive—resentment or finances—should probably have little bearing on how a message is treated in serious discussion.
Eliezer certainly has a lot of ambition, but I am surprised to see an accusation that this ambition is unacknowledged.
The impression I get from scanning their comment history is that metaphysicist means to suggest here that EY has ambitions he hasn’t acknowledged (e.g., the ambition to make money without conventional credentials), not that he fails to acknowledge any of the ambitions he has.
Thank you for this analysis, it made me think more about my motivations and their validity. I believe that my decision to permanently disengage from discussions with some people is based on the futility of such discussions in the past, not on the specific reasons they are futile. At some point I simply decide to cut my losses.
Indeed, present company not excluded. The question is whether it permanently prevents the ax-grinder from listening. EY, too, has his share of unacknowledged irrationalities, but both his status and his ability to listen and to provide insights make engaging him in a discussion a rewarding, if sometimes frustrating, experience.
I do not know why srdiamond’s need to bash SI is so entrenched, or whether it can be remedied to a degree where he is once again worth talking to, so at this point it is instrumentally rational for me to avoid replying to him.
Well, all we really know is that he chose to. It may be that everyone he works with then privately berated him for it.
That said, I share your sentiment.
Actually, if SI generally endorses this sort of public “airing of dirty laundry,” I encourage others involved in the organization to say so out loud.
The largest concern from reading this isn’t really what it brings up in a management context, but what it says about SI in general. Here is an area where there is real expertise and there are basic books that discuss well-understood methods, and SI didn’t make use of any of that. Given that, how likely should I think it is that, when SI and mainstream AI people disagree, part of the problem is the SI people not paying attention to basics?
(nods) The nice thing about general-purpose techniques for winning at life (as opposed to domain-specific ones) is that there’s lots of evidence available as to how effective they are.
Precisely. As one example of an existing baseline: there is already software that searches for solutions to engineering problems, such as ‘self improvement’ via the design of better chips. It works within a narrowly defined field in order to cull the search space. Should we expect state-of-the-art software of this kind to be beaten by someone’s contemporary paperclip maximizer? By how much?
Incredibly relevant to AI risk, but analysis can’t be faked without really having technical expertise.
I doubt there’s all that much of a correlation between these things to be honest.
This makes me wonder… What “for dummies” books should I be using as checklists right now? Time to set a 5-minute timer and think about it.
What did you come up with?
I haven’t actually found the right books yet, but these are the things where I decided I should find some “for beginners” text. The important insight is that I’m allowed to use these books as skill/practice/task checklists or catalogues, rather than ever reading them all straight through.
General interest:
Career
Networking
Time management
Fitness
For my own particular professional situation, skills, and interests:
Risk management
Finance
Computer programming
SAS
Finance careers
Career change
Web programming
Research/science careers
Math careers
Appraising
Real Estate
UNIX
For fitness, I’d found Liam Rosen’s FAQ (the ‘sticky’ from 4chan’s /fit/ board) to be remarkably helpful and information-dense. (Mainly, ‘toning’ doesn’t mean anything, and you should probably be lifting heavier weights in a linear progression, but it’s short enough to be worth actually reading through.)
The For Dummies series is generally very good indeed. Yes.
I remember that, when Anna was managing the fellows program, she was reading books of the “for dummies” genre and trying to apply them… it’s just that, as it happened, the conceptual labels she accidentally happened to give to the skill deficits she was aware of were “what it takes to manage well” (i.e. “basic management”) and “what it takes to be productive”, rather than “what it takes to (help) operate a nonprofit according to best practices”. So those were the subjects of the books she got. (And read, and practiced.) And then, given everything else the program and the organization was trying to do, there wasn’t really any cognitive space left over to effectively notice the possibility that those wouldn’t be the skills that other people afterwards would complain that nobody had acquired and obviously should have known to acquire. The rest of her budgeted self-improvement effort mostly went toward overcoming self-defeating emotional/social blind spots and motivated cognition. (And I remember Jasen’s skill learning focus was similar, except with more of the emphasis on emotional self-awareness and less on management.)
I remember Anna went out of her way to get advice from people who she already knew, who she knew to be better than her at various aspects of personal or professional functioning. And she had long conversations with supporters who she came into contact with for some other reasons; for those who had executive experience, I expect she would have discussed her understanding of SIAI’s current strategies with them and listened to their suggestions. But I don’t know how much she went out of her way to find people she didn’t already have reasonably reliable positive contact with, to get advice from them.
I don’t know much about the reasoning of most people not connected with the fellows program about the skills or knowledge they needed. I think Vassar was mostly relying on skills tested during earlier business experience, and otherwise was mostly preoccupied with the general crisis of figuring out how to quickly-enough get around the various hugely-saliently-discrepant-seeming-to-him psychological barriers that were causing everyone inside and outside the organization to continue unthinkingly shooting themselves in the feet with respect to this outside-evolutionary-context-problem of existential risk mitigation. For the “everyone outside’s psychological barriers” side of that, he was at least successful enough to keep SIAI’s public image on track to trigger people like David Chalmers and Marcus Hutter into meaningful contributions to and participation in a nascent Singularity-studies academic discourse. I don’t have a good idea what else was on his mind as something he needed to put effort into figuring out how to do, in what proportions occupying what kinds of subjective effort budgets, except that in total it was enough to put him on the threshold of burnout. Non-profit best practices apparently wasn’t one of those things though.
But the proper approach to retrospective judgement is generally a confusing question.
The general pattern, at least post-2008, may have been one where the people who could have been aware of problems felt too metacognitively exhausted and distracted by other problems to think about learning what to do about them, and hoped that someone else with more comparative advantage would catch them, or that the consequences wouldn’t be bigger than those of the other fires they were trying to put out.
There were also several attempts at building parts of a strategy document or strategic plan, which together took probably 400-1800 hours. In each case, the people involved ended up determining, from how long it was taking, that, despite reasonable-seeming initial expectations, it wasn’t on track to possibly become a finished presentable product soon enough to justify the effort. The practical effect of these efforts was instead mostly just a hard-to-communicate cultural shared understanding of the strategic situation and options—how different immediate projects, forms of investment, or conditions in the world might feed into each other on different timescales.
There was an accountant (who herself already cost like $33k/yr as the CFO, despite being split three ways with two other nonprofits) who would have been the one informally expected to have been monitoring for that sort of thing, and to have told someone about it if she saw something, out of the like three paid administrative slots at the time… well, yeah, that didn’t happen.
I agree with a paraphrase of John Maxwell’s characterization: “I’d rather hear Eliezer say ‘thanks for funding us until we stumbled across some employees who are good at defeating their akrasia and [had one of the names of the things they were aware they were supposed to] care about [happen to be “]organizational best practices[“]’, because this seems like a better depiction of what actually happened.” Note that this was most of the purpose of the Fellows program in the first place—to create an environment where people could be introduced to the necessary arguments/ideas/culture and to help sort/develop those people into useful roles, including replacing existing management, since everyone knew there were people who would be better at their job than they were and wished such a person could be convinced to do it instead.
FWIW, I never knew the purpose of the VF program was to replace existing SI management. And I somewhat doubt that you knew this at the time, either. I think you’re just imagining this retroactively given that that’s what ended up happening. For instance, the internal point system used to score people in the VFs program had no points for correctly identifying organizational improvements and implementing them. It had no points for doing administrative work (besides cleaning up the physical house or giving others car rides). And it had no points for rising to management roles. It was all about getting karma on LW or writing conference papers. When I first offered to help with the organization directly, I was told I was “too competent” and that I should go do something more useful with my talent, like start another business… not “waste my time working directly at SI.”
Seems like a fair paraphrase.
This inspired me to make a blog post: You need to read Nonprofit Kit for Dummies.
… which Eliezer has read and responded to, noting he did indeed read just that book in 2000 when he was founding SIAI. This suggests having someone of Luke’s remarkable drive was in fact the missing piece of the puzzle.
Fascinating! I want to ask “well, why didn’t it take then?”, but if I were in Eliezer’s shoes I’d be finding this discussion almost unendurably painful right now, and it feels like what matters has already been established. And of course he’s never been the person in charge of that sort of thing, so maybe he’s not who we should be grilling anyway.
Obviously we need How to be Lukeprog for Dummies. Luke appears to have written many fragments for this, of course.
Beating oneself up with hindsight bias is IME quite normal in this sort of circumstance, but not actually productive. Grilling the people who failed makes it too easy to blame them personally, when it’s a pattern I’ve seen lots and lots, suggesting the problem is not a personal failing.
Agreed entirely—it’s definitely not a mark of a personal failing. What I’m curious about is how we can all learn to do better at the crucial rationalist skill of making use of the standard advice about prosaic tasks—which is manifestly a non-trivial skill.
The Bloody Obvious For Dummies. If only common sense were!
From the inside (of a subcompetent charity—and I must note, subcompetent charities know they’re subcompetent), it feels like there’s all this stuff you’re supposed to magically know about, and lots of “shut up and do the impossible” moments. And you do the small very hard things, in a sheer tour de force of remarkable effort. But it leads to burnout. Until the organisation makes it to competence and the correct paths are retrospectively obvious.
That actually reads to me like descriptions I’ve seen of the startup process.
The problem is that there are two kinds of efficiency/competence here. The first is efficiency at doing the accounting correctly, which is relatively easy in comparison to the second: efficiency at actually doing relevant novel technical work that matters. For the former you can get advice from books; for the latter you won’t get any advice, it’s a harder problem, and the typical level of performance is exactly zero (even among those who get the first part right). The difference in difficulty is larger than that between building a robot kit by following instructions and designing a groundbreaking new robot and making a billion dollars off it.
The best advice to the vast majority of startups is: dissolve the startup and get normal jobs, starting tomorrow. The best advice to all of them is to take a very good look at themselves, knowing that the most likely conclusion should be “dissolve and get normal jobs”. The failed startups I’ve seen so far were propelled by pure, unfounded belief in themselves (like in a movie where someone doesn’t want to jump, another person says “yes, you can do it!”, and the person jumps, but rather than delivering the positive message by clearing the gap and surviving, falls to instant death while the fire they were running from simply goes out). The successful startups, on the other hand, had a well-founded belief in themselves (good track record, attainable goals), or started from a hobby project that became successful.
Judging from the success rate that VCs have at predicting successful startups, I conclude that the “pure unfounded belief on the one hand, well-founded belief on the other” metric is not easily applied to real organizations by real observers.
Mm. This is why an incompetent nonprofit can linger for years: no one else is doing what they do, so they feel they still have to exist, even though they’re not achieving much and would have died already as a for-profit business. I now suspect that the hard part for a nonprofit is something along the lines of working out what the hell you should be doing to achieve your goal. (I would be amazed if there were not extensive written-up research in this area, though I don’t know what it is.)
That book looks like the basic solution to the pattern I outline here, and from your description, most people who have any public good they want to achieve should read it around the time they think of getting a second person involved.
Donald Rumsfeld
...this was actually a terrible policy in historical practice.
That only seems relevant if the war in question is optional.
Rumsfeld is speaking of the Iraq war. It was an optional war, the army turned out to be far understrength for establishing order, and they deliberately threw out the careful plans for preserving e.g. Iraqi museums from looting that had been drawn up by the State Department, due to interdepartmental rivalry.
This doesn’t prove the advice is bad, but at the very least, Rumsfeld was just spouting off Deep Wisdom that he did not benefit from spouting; one would wish to see it spoken by someone who actually benefited from the advice, rather than someone who wilfully and wantonly underprepared for an actual war.
Indeed. The proper response, which is surely worth contemplation, would have been:
Sun Tzu
This is a circular definition, not advice.
I would naively read it as “don’t start a fight unless you know you’re going to win”.
If you read it literally. I think Sun Tzu is talking about the benefit of planning.
I’m guessing that something got lost in translation,
In context: http://suntzusaid.com/book/4
I think the quote is an alternative translation of paragraph 15 in the link above:
“Thus it is that in war the victorious strategist only seeks battle after the victory has been won, whereas he who is destined to defeat first fights and afterwards looks for victory.”
It has an associated commentary:
Ho Shih thus expounds the paradox: “In warfare, first lay plans which will ensure victory, and then lead your army to battle; if you will not begin with stratagem but rely on brute strength alone, victory will no longer be assured.”
I don’t see the circularity.
Just because a warrior is victorious doesn’t necessarily mean they won before going to war; it might be instead that victorious warriors go to war first and then seek to win, and defeated warriors do the same thing.
Can you spell out the circularity?
Unless you interpret “win first” as “prepare for every eventuality, calculate the unbiased probability of winning and be comfortable with the odds when going to battle”, “win first” can only be meaningfully applied in retrospect.
I think you’ve stumbled upon the correct interpretation.
Sun Tzu was fond of making warfare about strategy and logistics rather than battles, so that one would only fight when victory is a foregone conclusion.
Ah, I see what you mean now.
Thanks for the clarification.
Given the several-year lag between funding increases and the listed improvements, it appears that this was less the result of a prepared plan and more a process of underutilized resources attracting a mix of parasites (the theft) and talent (hopefully the more recent staff additions).
Which goes towards a critical question in terms of future funding: is SIAI primarily constrained in its mission by resources or competence?
Of course, the related question is: what is SIAI’s mission? Someone donating primarily for AGI research might not count recent efforts (LW, rationality camps, etc) as improvements.
What should a potential donor expect from money invested into this organization going forward? Internally, what are your metrics for evaluation?
Edited to add: I think that the spin-off of the rationality efforts is a good step towards answering these questions.
This seems like a rather absolute statement. Knowing Luke, I’ll bet he would’ve gotten some of it done even on a limited budget.
Luke and Louie Helm are both on paid staff.
I’m pretty sure their combined salaries are lower than the cost of the summer fellows program that SI was sponsoring four or five years ago. Also, if you accept my assertion that Luke could find a way to do it on a limited budget, why couldn’t somebody else?
Givewell is interested in finding charities that translate good intentions into good results. This requires that the employees of the charity have low akrasia, desire to learn about and implement organizational best practices, not suffer from dysrationalia, etc. I imagine that from Givewell’s perspective, it counts as a strike against the charity if some of the charity’s employees have a history of failing at any of these.
I’d rather hear Eliezer say “thanks for funding us until we stumbled across some employees who are good at defeating their akrasia and care about organizational best practices”, because this seems like a better depiction of what actually happened. I don’t get the impression SI was actively looking for folks like Louie and Luke.
Yes to this. Eliezer’s claim about the need for funding may suffer from many of Luke’s criticisms above. But usually the most important thing you need is talent, and that does require funding.
My hope is that the upcoming deluge of publications will answer this objection, but for the moment, I am unclear as to the justification for the level of resources being given to SIAI researchers.
This level of freedom is the dream of every researcher on the planet. Yet it’s unclear why these resources should be devoted to your projects. While I strongly believe that the current academic system is broken, you are asking for a level of support granted to top researchers prior to having made any original breakthroughs yourself.
If you can convince people to give you that money, wonderful. But until you have made at least some serious advancement to demonstrate your case, donating seems like an act of faith.
It’s impressive that you all have found a way to hack the system and get paid to develop yourselves as researchers outside of the academic system and I will be delighted to see that development bear fruit over the coming years. But, at present, I don’t see evidence that the work being done justifies or requires that support.
Because some people like my earlier papers and think I’m writing papers on the most important topic in the world?
Note that this isn’t uncommon. SI is far from the only think tank with researchers who publish in academic journals. Researchers at private companies do the same.
First, let me say that, after re-reading, I think that my previous post came off as condescending/confrontational which was not my intent. I apologize.
Second, after thinking about this for a few minutes, I realized that some of the reason your papers seem so fluffy to me is that they argue what I consider to be obvious points. In my mind, of course we are likely “to develop human-level AI before 2100.” Because of that, I may have tended to classify your work as outreach more than research.
But outreach is valuable. And, so that we can factor out the question of the independent contribution of your research, having people associated with SIAI with the publications/credibility to be treated as experts has gigantic benefits in terms of media multipliers (being the people who get called on for interviews, panels, etc). So, given that, I can see a strong argument for publication support being valuable to the overall organization goals regardless of any assessment of the value of the research.
My only point was that, in those situations, usually researchers are brought in with prior recognized achievements (or, unfortunately all too often, simply paper credentials). SIAI is bringing in people who are intelligent but unproven and giving them the resources reserved for top talent in academia or industry. As you’ve pointed out, one of the differences with SIAI is the lack of hoops to jump through.
Edit: I see you commented below that you view your own work as summarization of existing research and we agree on the value of that. Sorry that my slow typing speed left me behind the flow of the thread.
It’s true at my company, at least. There are quite a few papers out there authored by the researchers at the company where I work. There are several good business reasons for a company to invest time into publishing a paper; positive PR is one of them.
But then you put your intellect at issue, and I think I’m entitled to opine that you lack the qualities of intellect that would make such a recommendation credible. You’re a budding scholar; a textbook writer at heart. You lack any of the originality of a thinker.
You confirm the lead poster’s allegations that SI staff are insular and conceited.
Of course you are. And, you may not be one of the people who “like my earlier papers.”
Really? How? I commented earlier on LW (can’t find it now) about how the kind of papers I write barely count as “original research” because for the most part they merely summarize and clarify the ideas of others. But as Beckstead says, there is a strong need for that right now.
For insights in decision theory and FAI theory, I suspect we’ll have to look to somebody besides Luke Muehlhauser. We keep trying to hire such people but they keep saying “No.” (I got two more “no”s just in the last 3 weeks.) Part of that may be due to the past and current state of the organization — and luckily, fixing that kind of thing is something I seem to have some skills with.
True, dat.
Isn’t this very strong evidence in support for Holden’s point about “Apparent poorly grounded belief in SI’s superior general rationality” (excluding Luke, at least)? And especially this?
This topic is something I’ve been thinking about lately. Do SIers tend to have superior general rationality, or do we merely escape a few particular biases? Are we good at rationality, or just good at “far mode” rationality (aka philosophy)? Are we good at epistemic but not instrumental rationality? (Keep in mind, though, that rationality is only a ceteris paribus predictor of success.)
Or, pick a more specific comparison. Do SIers tend to be better at general rationality than someone who can keep a small business running for 5 years? Maybe the tight feedback loops of running a small business are better rationality training than “debiasing interventions” can hope to be.
Of course, different people are more or less rational in different domains, at different times, in different environments.
This isn’t an idle question about labels. My estimate of the scope and level of people’s rationality in part determines how much I update from their stated opinion on something. How much evidence for Hypothesis X (about organizational development) is it when Eliezer gives me his opinion on the matter, as opposed to when Louie gives me his opinion on the matter? When Person B proposes to take on a totally new kind of project, I think their general rationality is a predictor of success — so, what is their level of general rationality?
Holden implies (and I agree with him) that there’s very little evidence at the moment to suggest that SI is good at instrumental rationality. As for epistemic rationality, how would we know? Is there some objective way to measure it? I personally happen to believe that if a person seems to take it as a given that he’s great at epistemic rationality, this fact should count as evidence (however circumstantial) against him being great at epistemic rationality… but that’s just me.
If you accept that your estimate of someone’s “rationality” should depend on the domain, the environment, the time, the context, etc., and what you want to do is make reliable estimates of the reliability of their opinion, their chances of success, etc., it seems to follow that you should be looking for comparisons within a relevant domain, environment, etc.
That is, if you want to get opinions about hypothesis X about organizational development that serve as significant evidence, it seems the thing to do is to find someone who knows a lot about organizational development—ideally, someone who has been successful at developing organizations—and consult their opinions. How generally rational they are might be very relevant causally, or it might not, but is in either case screened off by their domain competence… and their domain competence is easier to measure than their general rationality.
So is their general rationality worth devoting resources to determining?
It seems this only makes sense if you have already (e.g.) decided to ask Eliezer and Louie for their advice, whether it’s good evidence or not, and now you need to know how much evidence it is, and you expect the correct answer is different from the answer you’d get by applying the metrics you know about (e.g., domain familiarity and previously demonstrated relevant expertise).
I do spend a fair amount of time talking to domain experts outside of SI. The trouble is that the question of what we should do about thing X doesn’t just depend on domain competence but also on thousands of details about the inner workings of SI and our mission that I cannot communicate to domain experts outside SI, but which Eliezer and Louie already possess.
So it seems you have a problem in two domains (organizational development + SI internals) and different domain experts in both domains (outside domain experts + Eliezer/Louie), and need some way of cross-linking the two groups’ expertise to get a coherent recommendation, and the brute-force solutions (e.g. get them all in a room together, or bring one group up to speed on the other’s domain) are too expensive to be worth it. (Well, assuming the obstacle isn’t that the details need to be kept secret, but simply that expecting an outsider to come up to speed on all of SI’s local potentially relevant trivia simply isn’t practical.)
Yes?
Yeah, that can be a problem.
In that position, for serious questions I would probably ask E/L for their recommendations and a list of the most relevant details that informed that decision, then go to outside experts with a summary of the competing recommendations and an expanded version of that list and ask for their input. If there’s convergence, great. If there’s divergence, iterate.
This is still an expensive approach, though, so I can see where a cheaper approximation for less important questions is worth having.
Yes to all this.
In the world in which a varied group of intelligent and especially rational people are organizing to literally save humanity, I don’t see the relatively trivial, but important, improvements you’ve made in a short period of time being made because they were made years ago. And I thought that already accounting for the points you’ve made.
I mean, the question this group should be asking themselves is “how can we best alter the future so as to navigate towards FAI?” So, how did they apparently miss something like opportunity cost? Why, for instance, has their salaries increased when they could’ve been using it to improve the foundation of their cause from which everything else follows?
(Granted, I don’t know the history and inner workings of the SI, and so I could be missing some very significant and immovable hurdles, but I don’t see that as very likely; at least, not as likely as Holden’s scenario.)
I don’t know what these sentences mean.
Actually, salary increases help with opportunity cost. At very low salaries, SI staff ends up spending lots of time and energy on general life cost-saving measures that distract us from working on x-risk reduction. And our salaries are generally still pretty low. I have less than $6k in my bank accounts. Outsourcing most tasks to remote collaborators also helps a lot with opportunity cost.
People are more rational in different domains, environments, and so on.
The people at SI may have poor instrumental rationality while being adept at epistemic rationality.
Being rational doesn’t necessarily mean being successful.
I accept all those points, and yet I still see the Singularity Institute having made the improvements that you’ve made since being hired before you were hired if they have superior general rationality. That is, you wouldn’t have that list of relatively trivial things to brag about because someone else would have recognized the items on that list as important and got them done somehow (ignore any negative connotations—they’re not intended).
For instance, I don’t see a varied group of people with superior general rationality not discovering or just not outsourcing work they don’t have a comparative advantage in (i.e., what you’ve done). That doesn’t look like just a failure in instrumental rationality, or just rationality operating on a different kind of utility function, or just a lack of domain specific knowledge.
The excuses available to a person acting in a way that’s non-traditionally rational are less convincing when you apply them to a group.
No, I get that. But that still doesn’t explain away the higher salaries like EY’s 80k/year and its past upwards trend. I mean, these higher-paid people are the most committed to the cause, right? I don’t see those people taking a higher salary when they could use that money for more outsourcing, or another employee, or better employees, if they want to literally save humanity while being superior in general rationality. It’s like a homeless person desperately in want of shelter trying to save enough for an apartment and yet buying meals at some restaurant.
That’s the point I was making, why wasn’t that done earlier? How did these people apparently miss out on opportunity cost? (And I’m just using outsourcing as an example because it was one of the most glaring changes you made that I think should have probably been made much earlier.)
Right, I think we’re saying the same thing, here: the availability of so much low-hanging fruit in organizational development as late as Sept. 2011 is some evidence against the general rationality of SIers. Eliezer seems to want to say it was all a matter of funding, but that doesn’t make sense to me.
Now, on this:
For some reason I’m having a hard time parsing your sentences for unambiguous meaning, but if I may attempt to rephrase: “SIers wouldn’t take any salaries higher than (say) $70k/yr if they were truly committed to the cause and good in general rationality, because they would instead use that money to accomplish other things.” Is that what you’re saying?
I’ve heard the Bay Area is expensive, and previously pointed out that Eliezer earns more than I do, despite me being in the top 10 SI donors.
I don’t mind, though, as has been pointed out, even thinking about muffins might be a question invoking existential risk calculations.
...and much beloved for it.
Yes, the Bay Area is expensive. We’ve considered relocating, but on the other hand the (by far) best two places for meeting our needs in HR and in physically meeting with VIPs are SF and NYC, and if anything NYC is more expensive than the Bay Area. We cut living expenses where we can: most of us are just renting individual rooms.
Also, of course, it’s not like the Board could decide we should relocate to a charter city in Honduras and then all our staff would be able to just up and relocate. :)
(Rain may know all this; I’m posting it for others’ benefit.)
I think it’s crucial that SI stay in the Bay Area. Being in a high-status place signals that the cause is important. If you think you’re not taken seriously enough now, imagine if you were in Honduras…
Not to mention that HR is without doubt the single most important asset for SI. (Which is why it would probably be a good idea to pay more than the minimum cost of living.)
Out of curiosity only: what were the most significant factors that led you to reject telepresence options?
FWIW, Wikimedia moved from Florida to San Francisco precisely for the immense value of being at the centre of things instead of the middle of nowhere (and yes, Tampa is the middle of nowhere for these purposes, even though it still has the primary data centre). Even paying local charity scale rather than commercial scale (there’s a sort of cycle where WMF hires brilliant kids, they do a few years working at charity scale then go to Facebook/Google/etc for gobs of cash), being in the centre of things gets them staff and contacts they just couldn’t get if they were still in Tampa. And yes, the question came up there pretty much the same as it’s coming up here: why be there instead of remote? Because so much comes with being where things are actually happening, even if it doesn’t look directly related to your mission (educational charity, AI research institute).
I didn’t know this, but I’m happy to hear it.
The charity is still registered in Florida but the office is in SF. I can’t find the discussion on a quick search, but all manner of places were under serious consideration—including the UK, which is a horrible choice for legal issues in so very many ways.
In our experience, monkeys don’t work that way. It sounds like it should work, and then it just… doesn’t. Of course we do lots of Skyping, but regular human contact turns out to be pretty important.
(nods) Yeah, that’s been my experience too, though I’ve often suspected that companies like Google probably have a lot of research on the subject lying around that might be informative.
Some friends of mine did some experimenting along these lines when doing distributed software development (in both senses) and were somewhat startled to realize that Dark Age of Camelot worked better for them as a professional conferencing tool than any of the professional conferencing tools their company had. They didn’t mention this to their management.
I am reminded that Flickr started as a photo add-on for an MMORPG...
-
Enough for you to agree with Holden on that point?
Yes, but I wouldn’t set a limit at a specific salary range; I’d expect them to give as much as they optimally could, because I assume they’re more concerned with the cause than the money. (re the 70k/yr mention: I’d be surprised if that was anywhere near optimal)
Probably not. He and I continue to dialogue in private about the point, in part to find the source of our disagreement.
I believe everyone except Eliezer currently makes between $42k/yr and $48k/yr — pretty low for the cost of living in the Bay Area.
So, if you disagree with Holden, I assume you think SIers have superior general rationality: why?
And I’m confident SIers will score well on rationality tests, but that looks like specialized rationality. I.e., you can avoid a bias but still fail to achieve your goals. To me, the SI approach seems poorly leveraged. I expect more significant returns from simple knowledge acquisition. E.g., you want to become successful? YOU WANT TO WIN?! Great, read these textbooks on microeconomics, finance, and business. I think this is more the approach you take anyway.
That isn’t as bad as I thought it was; I don’t know if that’s optimal, but it seems at least reasonable.
I’ll avoid double-labor on this and wait to reply until my conversation with Holden is done.
Right. Exercise the neglected virtue of scholarship and all that.
It’s not that easy to dismiss; if it’s as poorly leveraged as it looks relative to other approaches then you have little reason to be spreading and teaching SI’s brand of specialized rationality (except for perhaps income).
I’m not dismissing it, I’m endorsing it and agreeing with you that it has been my approach ever since my first post on LW.
Weird, I have this perception of SI being heavily invested in overcoming biases and epistemic rationality training to the detriment of relevant domain specific knowledge, but I guess that’s wrong?
I’m lost again; I don’t know what you’re saying.
I wasn’t talking about you; I was talking about SI’s approach in spreading and training rationality. You (SI) have Yudkowsky writing books, you have rationality minicamps, you have LessWrong, you and others are writing rationality articles and researching the rationality literature, and so on.
That kind of rationality training, research, and message looks poorly leveraged in achieving your goals, is what I’m saying. Poorly leveraged for anyone trying to achieve goals. And at its most abstract, that’s what rationality is, right? Achieving your goals.
So, I don’t care if your approach was to acquire as much relevant knowledge as possible before dabbling in debiasing, Bayes, and whatnot (i.e., prioritizing the most leveraged approach). I’m wondering why your approach doesn’t seem to be SI’s approach. I’m wondering why SI doesn’t prioritize rationality training, research, and messaging by whatever is most leveraged in achieving SI’s goals. I’m wondering why SI doesn’t spread the virtue of scholarship to the detriment of training debiasing and so on.
SI wants to raise the sanity waterline; is what SI is doing even near optimal for that? Knowing what SIers knew and trained for couldn’t even get them, for years, to notice an obvious opportunity cost they were paying; that is sad.
(Disclaimer: the following comment should not be taken to imply that I myself have concluded that SI staff salaries should be reduced.)
I’ll grant you that it’s pretty low relative to other Bay Area salaries. But as for the actual cost of living, I’m less sure.
I’m not fortunate enough to be a Bay Area resident myself, but here is what the internet tells me:
After taxes, a $48,000/yr gross salary in California equates to a net of around $3000/month.
A 1-bedroom apartment in Berkeley and nearby places can be rented for around $1500/month. (Presumably, this is the category of expense where most of the geography-dependent high cost of living is contained.)
If one assumes an average spending of $20/day on food (typically enough to have at least one of one’s daily meals at a restaurant), that comes out to about $600/month.
That leaves around $900/month for miscellaneous expenses, which seems pretty comfortable for a young person with no dependents.
So, if these numbers are right, it seems that this salary range is actually right about what the cost of living is. Of course, this calculation specifically does not include costs relating to signaling (via things such as choices of housing, clothing, transportation, etc.) that one has more money than necessary to live (and therefore isn’t low-status). Depending on the nature of their job, certain SI employees may need, or at least find it distinctly advantageous for their particular duties, to engage in such signaling.
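For anyone who wants to tinker with these figures, here is a minimal back-of-the-envelope sketch of the same budget (all numbers are the assumptions stated above, not verified data):

```python
# Back-of-the-envelope check of the assumed Bay Area budget above.
# All figures are the assumptions from the comment, not verified data.

net_monthly = 3000    # assumed net income after CA taxes on ~$48k/yr gross
rent_monthly = 1500   # assumed 1-bedroom rent in Berkeley
food_daily = 20       # assumed food spending per day

food_monthly = food_daily * 30
misc_monthly = net_monthly - rent_monthly - food_monthly

print(f"food: ~${food_monthly}/month")                     # ~$600
print(f"left for everything else: ~${misc_monthly}/month") # ~$900
```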
Damn good for someone just out of college—without a degree!
The point is that we’re consequentialists, and lowering salaries even further would save money (on salaries) but result in SI getting less done, not more — for the same reason that outsourcing fewer tasks would save money (on outsourcing) but cause us to get less done, not more.
You say this as though it’s obvious, but if I’m not mistaken, salaries used to be about 40% of what they are now, and while the higher salaries sound like they are making a major productivity difference, hiring 2.5 times as many people would also make a major productivity difference. (Though yes, obviously marginal hires would be lower in quality.)
I don’t think salaries were ever as low as 40% of what they are now. When I came on board, most people were at $36k/yr.
To illustrate why lower salaries means less stuff gets done: I’ve been averaging 60 hours per week, and I’m unusually productive. If I am paid less, that means that (to pick just one example from this week) I can’t afford to take a taxi to and from the eye doctor, which means I spend 1.5 hrs each way changing buses to get there, and spend less time being productive on x-risk. That is totally not worth it. Future civilizations would look back on this decision as profoundly stupid.
Pretty sure Anna and Steve Rayhawk had salaries around $20k/yr at some point while living in Silicon Valley.
I don’t think that you’re really responding to Steven’s point. Yes, as Steven said, if you were paid less then clearly that would impose more costs on you, so ceteris paribus your getting paid less would be bad. But, as Steven said, the opportunity cost is potentially very high. You haven’t made a rationally compelling case that the missed opportunity is “totally not worth it” or that heeding it would be “profoundly stupid”, you’ve mostly just re-asserted your conclusion, contra Steven’s objection. What are your arguments that this is the case? Note that I personally think it’s highly plausible that $40-50k/yr is optimal, but as far as I can see you haven’t yet listed any rationally compelling reasons to think so.
(This comment is a little bit sterner than it would have been if you hadn’t emphatically asserted that conclusions other than your own would be “profoundly stupid” without first giving overwhelming justification for your conclusion. It is especially important to be careful about such apparent overconfidence on issues where one clearly has a personal stake in the matter.)
I will largely endorse Will’s comment, then bow out of the discussion, because this appears to be too personal and touchy a topic for a detailed discussion to be fruitful.
If so, I suspect they were burning through savings during this time or had some kind of cheap living arrangement that I don’t have.
I couldn’t really get by on less, so paying me less would cause me to quit the organization and do something else instead, which would cause much of this good stuff to probably not happen.
It’s VERY hard for SingInst to purchase value as efficiently as by purchasing Luke-hours. At $48k/yr for 60 hrs/wk, I make $15.38/hr, and one Luke-hour is unusually productive for SingInst. Paying me less and thereby causing me to work fewer hours per week is a bad value proposition for SingInst.
Or, as Eliezer put it:
This seems to me unnecessarily defensive. I support the goals of SingInst, but I could never bring myself to accept the kind of salary cut you guys are taking in order to work there. Like every other human on the planet, I can’t be accurately modelled with a utility function that places any value on far distant strangers; you can more accurately model what stranger-altruism I do show as purchase of moral satisfaction, though I do seek for such altruism to be efficient. SingInst should pay the salaries it needs to pay to recruit the kind of staff it needs to fulfil its mission; it’s harder to recruit if staff are expected to be defensive about demanding market salaries for their expertise, with no more than a normal adjustment for altruistic work much as if they were working for an animal sanctuary.
Yes, exactly.
So when I say “unnecessarily defensive”, I mean that all the stuff about the cost of taxis is after-the-fact defensive rationalization; it can’t be said about a single dollar you spend on having a life outside of SI. The truth is that even the best human rationalist in the world isn’t going to agree to giving those up, and since you have to recruit humans, you’d best pay the sort of salary that is going to attract and retain them. That of course includes yourself.
The same goes for saying “move to the Honduras”. Your perfectly utility-maximising AGIs will move to the Honduras, but your human staff won’t; they want to live in places like the Bay Area.
You know that the Bay Area is freakin’ expensive, right?
Re-reading, the whole thing is pretty unclear!
As katydee and thomblake say, I mean that working for SingInst would mean a bigger reduction in my salary than I could currently bring myself to accept. If I really valued the lives of strangers as a utilitarian, the benefits to them of taking a salary cut would be so huge that it would totally outweigh the costs to me. But it looks like I only really place direct value on the short-term interests of myself and those close to me, and everything else is purchase of moral satisfaction. Happily, purchase of moral satisfaction can still save the world if it is done efficiently.
Since the labour pool contains only human beings, with no true altruistic utility maximizers, SingInst should hire and pay accordingly; the market shows that people will accept a lower salary for a job that directly does good, but not a vastly lower salary. It would increase SI-utility if Luke accepted a lower salary, but it wouldn’t increase Luke-utility, and driving Luke away would cost a lot of SI-utility, so calling for it is in the end a cheap shot and a bad recommendation.
I live in London, which is also freaking expensive—but so are all the places I want to live. There’s a reason people are prepared to pay more to live in these places.
Hmm… Perhaps you don’t know that “salary cut” above means taking much less money?
I had missed the word cut. Damn it, I shouldn’t be commenting while sleep-deprived!
Indeed. I guess “taking a cut” can sometimes mean “taking some of the money”, so you could interpret this as meaning “I couldn’t accept all that money”, which as you say is the opposite of what I meant!
So why not relocate SIAI somewhere with a more reasonable cost of living?
I think the standard answer is that the networking and tech industry connections available in the Bay Area are useful enough to SIAI to justify the high costs of operating there.
[comment deleted]
Perhaps that’s why he’s saying he wouldn’t be willing to live there on a low salary?
I understand the point you’re making regarding salaries, and for once I agree.
However, it’s rather presumptuous of you (and/or Eliezer) to assume, implicitly, that our choices are limited to only two possibilities: “Support SIAI, save the world”, and “Don’t support SIAI, the world is doomed”. I can envision many other scenarios, such as “Support SIAI, but their fears were overblown and you implicitly killed N children by not spending the money on them instead”, or “Don’t support SIAI, support some other organization instead because they’ll have a better chance of success”, etc.
Where did we say all that?
In your comment above, you said:
You also quoted Eliezer saying something similar.
This outlook implies strongly that whatever SIAI is doing is of such monumental significance that future civilizations will not only remember its name, but also reverently preserve every decision it made. You are also quite fond of saying that the work that SIAI is doing is tantamount to “saving the world”; and IIRC Eliezer once said that, if you have a talent for investment banking, you should make as much money as possible and then donate it all to SIAI, as opposed to any other charity.
This kind of grand rhetoric presupposes not only that the SIAI is correct in its risk assessment regarding AGI, but also that they are uniquely qualified to address this potentially world-ending problem, and that, over the ages, no one more qualified could possibly come along. All of this could be true, but it’s far from a certainty, as your writing would seem to imply.
I’m not seeing how the above implies the thing you said:
(Note that I don’t necessarily endorse things you report Eliezer as having said.)
You appear to be very confident that future civilizations will remember SIAI in a positive way, and care about its actions. If so, they must have some reason for doing so. Any reason would do, but the most likely reason is that SIAI will accomplish something so spectacularly beneficial that it will affect everyone in the far future. SIAI’s core mission is to save the world from UFAI, so it’s reasonable to assume that this is the highly beneficial effect that the SIAI will achieve.
I don’t have a problem with this chain of events, just with your apparent confidence that (a) it’s going to happen in exactly that way, and (b) your organization is the only one qualified to save the world in this specific fashion.
(EDIT: I forgot to say that, if we follow your reasoning to its conclusion, then you are indeed implying that donating as much money or labor as possible to SIAI is the only smart move for any rational agent.)
Note that I have no problem with your main statement, i.e. “lowering the salaries of SIAI members would bring us too much negative utility to compensate for the monetary savings”. This kind of cost-benefit analysis is done all the time, and future civilizations rarely enter into it.
Well no, of course it’s not a certainty. All efforts to make a difference are decisions under uncertainty. You’re attacking a straw man.
Please substitute “certainty minus epsilon” for “certainty” wherever you see it in my post. It was not my intention to imply 100% certainty; just a confidence value so high that it amounts to the same thing for all practical purposes.
I don’t think “certainty minus epsilon” improves much. It moves it from theoretical impossibility to practical—but looking that far out, I expect “likelihood” might be best.
I don’t understand your comment… what’s the practical difference between “extremely high likelihood” and “extremely high certainty” ?
And where do SI claim even that? Obviously some of their discussions are implicitly conditioned on the fundamental assumptions behind their mission being true, but that doesn’t mean that they have extremely high confidence in those assumptions.
In the SIAI/transhumanist outlook, if civilization survives, some large fraction (perhaps a majority) of extant human minds will survive as uploads. As a result, all of their memories will likely be stored, dissected, shared, searched, judged, and so on. Much will be preserved in such a future. And even without uploading, there are plenty of people who have maintained websites since the early days of the internet with no loss of information, and this is quite likely to remain true far into the future if civilization survives.
“1. I couldn’t really get by on less”
It is called a budget, son.
Plenty of people make less than you and work harder than you. Look in every major city and you will find plenty of people that fit this category, both in business and labor.
“That is totally not worth it. Future civilizations would look back on this decision as profoundly stupid.”
Elitism plus demanding that you don’t have to budget. Seems that you need to work more and focus less on how “awesome” you are.
You make good contributions...but let’s not get carried away.
If you really cared about future risk you would be working away at the problem even with a smaller salary. Focus on your work.
What we really need is some kind of emotionless robot who doesn’t care about its own standard of living and who can do lots of research and run organizations and suchlike without all the pesky problems introduced by “being human”.
Oh, wait...
Downvoted for this; Rain’s reply to the parent goes for me too.
That’s not actually that good, I don’t think—I go to a good college, and I know many people who are graduating to 60k-80k+ jobs with recruitment bonuses, opportunities for swift advancement, etc. Some of the best people I know could literally drop out now (three or four weeks prior to graduation) and immediately begin making six figures.
SIAI wages certainly seem fairly low to me relative to the quality of the people they are seeking to attract, though I think there are other benefits to working for them that cause the organization to attract skillful people regardless.
A Dilbert comic said it.
Ouch. I’d like to think that the side benefits for working for SIAI outweigh the side benefits for working for whatever soulless corporation Dilbert’s workplace embodies, though there is certainly a difference between side benefits and actual monetary compensation.
I graduated ~5 years ago with an engineering degree from a first-tier university, and I would have considered those starting salaries low to decent, not high. This is especially true in places with a high cost of living like the Bay Area.
Having a good internship during college often meant starting out at 60k/yr if not higher.
If this is significantly different for engineers exiting first-tier universities now, it would be interesting to know.
To summarize and rephrase: in a “counterfactual” world where SI was actually rational, they would have found all these solutions and done all these things long ago.
Many of your sentences are confusing because you repeatedly use the locution “I see X”/ “I don’t see X” in a nonstandard way, apparently to mean “X would have happened” /”X would not have happened”.
This is not the way that phrase is usually understood. Normally, “I see X” is taken to mean either “I observe X” or “I predict X”. For example I might say (if I were so inclined):
meaning that I believe (from my observation) they are in fact being rational. Or, I might say:
meaning that I don’t predict that will happen. But I would not generally say:
if what I mean is “these people should/would not have taken a higher salary [if such-and-such were true]”.
Oh, I see ;) Thanks. I’ll definitely act on your comment, but I was using “I see X” as “I predict X”—just in the context of a possible world. E.g., I predict in the possible world in which SIers are superior in general rationality and committed to their cause, Luke wouldn’t have that list of accomplishments. Or, “yet I still see the Singularity Institute having made the improvements...”
I now see that I’ve been using ‘see’ as syntactic sugar for counterfactual talk… but no more!
To get away with this, you really need, at minimum, an explicit counterfactual clause (“if”, “unless”, etc.) to introduce it: “In a world where SIers are superior in general rationality, I don’t see Luke having that list of accomplishments.”
The problem was not so much that your usage itself was logically inconceivable, but rather that it collided with the other interpretations of “I see X” in the particular contexts in which it occurred. E.g. “I don’t see them taking higher salaries” sounded like you were saying that they weren’t taking higher salaries. (There was an “if” clause, but it came way too late!)
Have you considered the possibility that even higher salaries might raise productivity further?
I think we should search systematically for ways to convert money into increased productivity.
By what measure do you figure that?
That might be informative if we knew anything about your budget, but without any sort of context it sounds purely obfuscatory. (Also, your bank account is pretty close to my annual salary, so you might want to consider what you’re actually signalling here and to whom.)
I found this complaint insufficiently detailed and not well worded.
Average people think their rationality is moderately good. Average people are not very rational. SI affiliated people think they are adept or at least adequate at rationality. SI affiliated people are not complete disasters at rationality.
SI affiliated people are vastly superior to others in general rationality. So the original complaint, literally interpreted, is false.
An interesting question might be on the level of: “Do SI affiliates have rationality superior to what the average person falsely believes his or her rationality is?”
Holden’s complaints each have their apparent legitimacy change differently under his and my beliefs. Some have to do with overconfidence or incorrect self-assessment, others with other-assessment, others with comparing SI people to others. Some of them:
Largely agree, as this relates to overconfidence.
Moderately disagree, as this relies on the rationality of others.
Largely disagree, as this relies significantly on the competence of others.
Largely agree, as this depends more on accurate assessment of one’s own rationality.
There is instrumental value in falsely believing others to have a good basis for disagreement so one’s search for reasons one might be wrong is enhanced. This is aside from the actual reasons of others.
It is easy to imagine an expert in a relevant field objecting to SI because something SI does or says seems wrong, only to have the expert couch the objection in literally false terms, perhaps ones that flow from motivated cognition and bear no trace of the real, relevant reason for the objection. This could be followed by SI evaluating and dismissing the objection, and then by a failure of a type the expert did not actually predict... all such nuances are lost in the literally false “Apparent poorly grounded belief in SI’s superior general rationality.”
Such a failure comes to mind and is easy for me to imagine as I think this is a major reason why “Lack of impressive endorsements” is a problem. The reasons provided by experts for disagreeing with SI on particular issues are often terrible, but such expressions are merely what they believe their objections to be, and their expertise is in math or some such, not in knowing why they think what they think.
As a supporter and donor to SI since 2006, I can say that I had a lot of specific criticisms of the way that the organization was managed. The points Luke lists above were among them. I was surprised that on many occasions management did not realize the obvious problems and fix them.
But the current management is now recognizing many of these points and resolving them one by one, as Luke says. If this continues, SI’s future looks good.
Why did you start referring to yourself in the first person and then change your mind? (Or am I missing something?)
Brain fart: now fixed.
(Why was this downvoted? If it’s because the downvoter wants to see fewer brain farts, they’re doing it wrong, because the message such a downvote actually conveys is that they want to see fewer acknowledgements of brain farts. Upvoted back to 0, anyway.)
The ‘example’ link is dead.
Fixed.
The things posted here are not impressive enough to make me more likely to donate to SIAI and I doubt they appear so for others on this site, especially the many lurkers/infrequent posters here.