I don’t think this response supports your claim that these improvements “would not and could not have happened without more funding than the level of previous years.”
I know your comment is very brief because you’re busy at minicamp, but I’ll reply to what you wrote, anyway: Someone of decent rationality doesn’t just “try things until something works.” Moreover, many of the things on the list of recent improvements don’t require an Amy, a Luke, or a Louie.
I don’t even have past management experience. As you may recall, I had significant ambiguity aversion about the prospect of being made Executive Director, but as it turned out, the solution to almost every problem X has been (1) read what the experts say about how to solve X, (2) consult with people who care about your mission and have solved X before, and (3) do what they say.
When I was made Executive Director and phoned our Advisors, most of them said “Oh, how nice to hear from you! Nobody from SingInst has ever asked me for advice before!”
That is the kind of thing that makes me want to say that SingInst has “tested every method except the method of trying.”
Donor database, strategic plan, staff worklogs, bringing staff together, expenses tracking, funds monitoring, basic management, best-practices accounting/bookkeeping… these are all literally from the Nonprofits for Dummies book.
Maybe these things weren’t done for 11 years because SI’s decision-makers did make good plans but failed to execute them due to the usual defeaters. But that’s not the history I’ve heard, except that some funds monitoring was insisted upon after the large theft, and a donor database was sorta-kinda-not-really attempted at one point. The history I’ve heard is that SI failed to make these kinds of plans in the first place, failed to ask advisors for advice, failed to read Nonprofits for Dummies, and so on.
Money wasn’t the barrier to doing many of those things, it was a gap in general rationality.
I will agree, however, that what is needed now is more money. We are rapidly becoming a more robust and efficient and rational organization, stepping up our FAI team recruiting efforts, stepping up our transparency and accountability efforts, and stepping up our research efforts, and all those things cost money.
At the risk of being too harsh… When I began to intern with the Singularity Institute in April 2011, I felt uncomfortable suggesting that people donate to SingInst, because I could see it from the inside and it wasn’t pretty. (And I’m not the only SIer who felt this way at the time.)
But now I do feel comfortable asking people to donate to SingInst. I’m excited about our trajectory and our team, and if we can raise enough support then we might just have a shot at winning after all.
Luke has just told me (personal conversation) that what he got from my comment was, “SIAI’s difficulties were just due to lack of funding” which was not what I was trying to say at all. What I was trying to convey was more like, “I didn’t have the ability to run this organization, and knew this—people who I hoped would be able to run the organization, while I tried to produce in other areas (e.g. turning my back on everything else to get a year of FAI work done with Marcello or writing the Sequences) didn’t succeed in doing so either—and the only reason we could hang on long enough to hire Luke was that the funding was available nonetheless and in sufficient quantity that we could afford to take risks like paying Luke to stay on for a while, well before we knew he would become Executive Director”.
Update: I came out of a recent conversation with Eliezer with a higher opinion of Eliezer’s general rationality, because several things that had previously looked to me like unforced, foreseeable mistakes by Eliezer now look to me more like non-mistakes or not-so-foreseeable mistakes.
It’s Luke you should have fallen in love with, since he is the one turning things around.
On the other hand I can count with one hand the number of established organisations I know of that would be sociologically capable of ceding power, status and control to Luke the way SingInst did. They took an untrained intern with essentially zero external status from past achievements and affiliations and basically decided to let him run the show (at least in terms of publicly visible initiatives). It is clearly the right thing for SingInst to do and admittedly Luke is very tall and has good hair which generally gives a boost when it comes to such selections—but still, making the appointment goes fundamentally against normal human behavior.
(Where I say “count with one hand” I am not including the use of any digits thereupon. I mean one.)
As a minor note, observe that claims of extraordinary rationality do not necessarily contradict claims of irrationality. The sanity waterline is very low.
Do you mean to imply in context here that the organizational management of SIAI at the time under discussion was above average for a nonprofit organization? Or are you just making a more general statement that a system can be irrational while demonstrating above average rationality? I certainly agree with the latter.
Are you comparing it to the average among nonprofits started, or nonprofits extant? I would guess that it was well below average for extant nonprofits, but about or slightly above average for started nonprofits. I’d guess that most nonprofits are started by people who don’t know what they’re doing and don’t know what they don’t know, and that SI probably did slightly better because the people who were being a bit stupid were at least very smart, which can help. However, I’d guess that most such nonprofits don’t live long because they don’t find a Peter Thiel to keep them alive.
Your assessment looks about right to me. I have considerable experience of averagely-incompetent nonprofits, and SIAI looks normal to me. I am strongly tempted to grab that “For Dummies” book and, if it’s good, start sending copies to people …
I don’t see the point of comparing to average nonprofits. Average for-profits don’t realize any profit, and average nonprofits just waste money.
I would say SIAI is best paralleled to the average started ‘research’ organization: one developing some free-energy something, run by non-scientists, with some hired scientists as chaff.
Sadly, I agree. Unless you look at it very closely, SIAI pattern-matches to “crackpots trying to raise money to fund their crackpottiness” fairly well. (What saves them is that their ideas are a lot better than the average crackpot.)
Or are you just making a more general statement that a system can be irrational while demonstrating above average rationality?
Yes, this.
On an arbitrary scale I just made up, below 100 degrees of rationality is “irrational”, and 0 degrees of rationality is “ordinary”. 50 is extraordinarily rational and yet irrational.
Being at 50 while thinking you’re at 100 is being an extraordinary loser (overconfidence leads to big failures).
In any case this is just word play. Holden has seen many organizations that are or were more rational; that’s probably what he means by lack of extraordinary rationality.
Just to let you know, you’ve just made it on my list of the very few LW regulars I no longer bother replying to, due to the proven futility of any communications. In your case it is because you have a very evident ax to grind, which is incompatible with rational thought.
This comment seems strange. Is having an ax to grind opposed to rationality? Then why does Eliezer Yudkowsky, for example, not hesitate to advocate for causes such as friendly AI? Doesn’t he have an ax to grind? More of one really, since this ax chops trees of gold.
It would seem intellectual honesty would require that you say you reject discussions with people with an ax to grind, unless you grind a similar ax.
From http://www.usingenglish.com: “If you have an axe to grind with someone or about something, you have a grievance, a resentment and you want to get revenge or sort it out.” One can hardly call the unacknowledged emotions of resentment and needing revenge/retribution compatible with rationality. srdiamond piled up a bunch of (partially correct but irrelevant in the context of my comment) negative statements about SI, making these emotions quite clear.
That’s a restrictive definition of “ax to grind,” by the way—it’s normally used to mean any special interest in the subject: “an ulterior often selfish underlying purpose” (Merriam-Webster’s Collegiate Dictionary).
But I might as well accept your meaning for discussion purposes. If you detect unacknowledged resentment in srdiamond, don’t you detect unacknowledged ambition in Eliezer Yudkowsky?
There’s actually good reason for the broader meaning of “ax to grind.” Any special stake is a bias. I don’t think you can say that someone who you think acts out of resentment, like srdiamond, is more intractably biased than someone who acts out of other forms of narrow self-interest, which almost invariably applies when someone defends something he gets money from.
I don’t think it’s a rational method to treat people differently, as inherently less rational, when they seem resentful. It is only one of many difficult biases. Financial interest is probably more biasing. If you think the arguments are crummy, that’s something else. But the motive—resentment or finances—should probably have little bearing on how a message is treated in serious discussion.
The impression I get from scanning their comment history is that metaphysicist means to suggest here that EY has ambitions he hasn’t acknowledged (e.g., the ambition to make money without conventional credentials), not that he fails to acknowledge any of the ambitions he has.
I don’t think it’s a rational method to treat people differently, as inherently less rational, when they seem resentful.
Thank you for this analysis, it made me think more about my motivations and their validity. I believe that my decision to permanently disengage from discussions with some people is based on the futility of such discussions in the past, not on the specific reasons they are futile. At some point I simply decide to cut my losses.
There’s actually good reason for the broader meaning of “ax to grind.” Any special stake is a bias.
Indeed, present company not excluded. The question is whether it permanently prevents the ax-grinder from listening. EY, too, has his share of unacknowledged irrationalities, but both his status and his ability to listen and to provide insights make engaging him in a discussion a rewarding, if sometimes frustrating, experience.
I don’t know why srdiamond’s need to bash SI is so entrenched, or whether it can be remedied to a degree where he is once again worth talking to, so at this point it is instrumentally rational for me to avoid replying to him.
Well, all we really know is that he chose to. It may be that everyone he works with then privately berated him for it. That said, I share your sentiment. Actually, if SI generally endorses this sort of public “airing of dirty laundry,” I encourage others involved in the organization to say so out loud.
The largest concern from reading this isn’t really what it brings up in the management context, but what it says about SI in general. Here’s an area where there’s real expertise and basic books that discuss well-understood methods, and they didn’t use any of that. Given that, how likely should I think it is that, when SI and mainstream AI people disagree, part of the problem is the SI people not paying attention to basics?
(nods) The nice thing about general-purpose techniques for winning at life (as opposed to domain-specific ones) is that there’s lots of evidence available as to how effective they are.
Precisely. For an example of one existing baseline: the existing software that searches for solutions to engineering problems, such as ‘self-improvement’ via design of better chips. It works within a narrowly defined field to cull the search space. Should we expect state-of-the-art software of this kind to be beaten by someone’s contemporary paperclip maximizer? By how much?
Incredibly relevant to AI risk, but analysis can’t be faked without really having technical expertise.
I haven’t actually found the right books yet, but these are the things where I decided I should find some “for beginners” text. The important insight is that I’m allowed to use these books as skill/practice/task checklists or catalogues, rather than ever reading them all straight through.
General interest:

- Career
- Networking
- Time management
- Fitness

For my own particular professional situation, skills, and interests:
For fitness, I’d found Liam Rosen’s FAQ (the ‘sticky’ from 4chan’s /fit/ board) to be remarkably helpful and information-dense. (Mainly, ‘toning’ doesn’t mean anything, and you should probably be lifting heavier weights in a linear progression, but it’s short enough to be worth actually reading through.)
these are all literally from the Nonprofits for Dummies book. [...] The history I’ve heard is that SI [...]
failed to read Nonprofits for Dummies,
I remember that, when Anna was managing the fellows program, she was reading books of the “for dummies” genre and trying to apply them… it’s just that, as it happened, the conceptual labels she happened to give to the skill deficits she was aware of were “what it takes to manage well” (i.e. “basic management”) and “what it takes to be productive”, rather than “what it takes to (help) operate a nonprofit according to best practices”. So those were the subjects of the books she got. (And read, and practiced.) And then, given everything else the program and the organization were trying to do, there wasn’t really any cognitive space left over to notice that these might not be the skills other people would later complain nobody had acquired and obviously should have. The rest of her budgeted self-improvement effort mostly went toward overcoming self-defeating emotional/social blind spots and motivated cognition. (And I remember Jasen’s skill-learning focus was similar, except with more of the emphasis on emotional self-awareness and less on management.)
failed to ask advisors for advice,
I remember Anna went out of her way to get advice from people who she already knew, who she knew to be better than her at various aspects of personal or professional functioning. And she had long conversations with supporters who she came into contact with for some other reasons; for those who had executive experience, I expect she would have discussed her understanding of SIAI’s current strategies with them and listened to their suggestions. But I don’t know how much she went out of her way to find people she didn’t already have reasonably reliable positive contact with, to get advice from them.
I don’t know much about the reasoning of most people not connected with the fellows program about the skills or knowledge they needed. I think Vassar was mostly relying on skills tested during earlier business experience, and otherwise was mostly preoccupied with the general crisis of figuring out how to quickly-enough get around the various hugely-saliently-discrepant-seeming-to-him psychological barriers that were causing everyone inside and outside the organization to continue unthinkingly shooting themselves in the feet with respect to this outside-evolutionary-context-problem of existential risk mitigation. For the “everyone outside’s psychological barriers” side of that, he was at least successful enough to keep SIAI’s public image on track to trigger people like David Chalmers and Marcus Hutter into meaningful contributions to and participation in a nascent Singularity-studies academic discourse. I don’t have a good idea what else was on his mind as something he needed to put effort into figuring out how to do, in what proportions occupying what kinds of subjective effort budgets, except that in total it was enough to put him on the threshold of burnout. Non-profit best practices apparently wasn’t one of those things though.
But the proper approach to retrospective judgement is generally a confusing question.
the kind of thing that makes me want to say [...]
The general pattern, at least post-2008, may have been one where the people who could have been aware of problems felt too metacognitively exhausted and distracted by other problems to think about learning what to do about them, and hoped that someone else with more comparative advantage would catch them, or that the consequences wouldn’t be bigger than those of the other fires they were trying to put out.
strategic plan [...] SI failed to make these kinds of plans in the first place,
There were also several attempts at building parts of a strategy document or strategic plan, which together took probably 400-1800 hours. In each case, the people involved ended up determining, from how long it was taking, that, despite reasonable-seeming initial expectations, it wasn’t on track to possibly become a finished presentable product soon enough to justify the effort. The practical effect of these efforts was instead mostly just a hard-to-communicate cultural shared understanding of the strategic situation and options—how different immediate projects, forms of investment, or conditions in the world might feed into each other on different timescales.
expenses tracking, funds monitoring [...] some funds monitoring was insisted upon after the large theft
There was an accountant (who herself already cost like $33k/yr as the CFO, despite being split three ways with two other nonprofits) who would have been the one informally expected to have been monitoring for that sort of thing, and to have told someone about it if she saw something, out of the like three paid administrative slots at the time… well, yeah, that didn’t happen.
I agree with a paraphrase of John Maxwell’s characterization: “I’d rather hear Eliezer say ‘thanks for funding us until we stumbled across some employees who are good at defeating their akrasia and [had one of the names of the things they were aware they were supposed to] care about [happen to be “]organizational best practices[“]’, because this seems like a better depiction of what actually happened.” Note that this was most of the purpose of the Fellows program in the first place—to create an environment where people could be introduced to the necessary arguments/ideas/culture and to help sort/develop those people into useful roles, including replacing existing management, since everyone knew there were people who would be better at their job than they were and wished such a person could be convinced to do it instead.
Note that this was most of the purpose of the Fellows program in the first place -- [was] to help sort/develop those people into useful roles, including replacing existing management
FWIW, I never knew the purpose of the VF program was to replace existing SI management. And I somewhat doubt that you knew this at the time, either. I think you’re just imagining this retroactively given that that’s what ended up happening. For instance, the internal point system used to score people in the VFs program had no points for correctly identifying organizational improvements and implementing them. It had no points for doing administrative work (besides cleaning up the physical house or giving others car rides). And it had no points for rising to management roles. It was all about getting karma on LW or writing conference papers. When I first offered to help with the organization directly, I was told I was “too competent” and that I should go do something more useful with my talent, like start another business… not “waste my time working directly at SI.”
“I’d rather hear Eliezer say ‘thanks for funding us until we stumbled across some employees who are good at defeating their akrasia and [had one of the names of the things they were aware they were supposed to] care about [happen to be “]organizational best practices[“]’, because this seems like a better depiction of what actually happened.”
… which Eliezer has read and responded to, noting he did indeed read just that book in 2000 when he was founding SIAI. This suggests having someone of Luke’s remarkable drive was in fact the missing piece of the puzzle.
Fascinating! I want to ask “well, why didn’t it take then?”, but if I were in Eliezer’s shoes I’d be finding this discussion almost unendurably painful right now, and it feels like what matters has already been established. And of course he’s never been the person in charge of that sort of thing, so maybe he’s not who we should be grilling anyway.
Obviously we need How to be Lukeprog for Dummies. Luke appears to have written many fragments for this, of course.
Beating oneself up with hindsight bias is IME quite normal in this sort of circumstance, but not actually productive. Grilling the people who failed makes it too easy to blame them personally, when it’s a pattern I’ve seen lots and lots, suggesting the problem is not a personal failing.
Agreed entirely—it’s definitely not a mark of a personal failing. What I’m curious about is how we can all learn to do better at the crucial rationalist skill of making use of the standard advice about prosaic tasks—which is manifestly a non-trivial skill.
The Bloody Obvious For Dummies. If only common sense were!
From the inside (of a subcompetent charity—and I must note, subcompetent charities know they’re subcompetent), it feels like there’s all this stuff you’re supposed to magically know about, and lots of “shut up and do the impossible” moments. And you do the small very hard things, in a sheer tour de force of remarkable effort. But it leads to burnout. Until the organisation makes it to competence and the correct paths are retrospectively obvious.
That actually reads to me like descriptions I’ve seen of the startup process.
The problem is that there are two efficiencies/competences here. The first, doing the accounting correctly, is relatively easy; the second, actually doing relevant novel technical work that matters, is much harder. For the former you can get advice from books; for the latter you won’t get any advice, it’s a harder problem, and the typical level of performance is exactly zero (even for those who get the first part right). The difference in difficulties is larger than that between building a robot kit by following instructions and designing a ground-breaking new robot that makes a billion dollars.
The best advice to the vast majority of startups is: dissolve the startup and get normal jobs, starting tomorrow. The best advice to all of them is to take a very good look at themselves, knowing that the most likely conclusion should be “dissolve and get normal jobs.” The failed startups I’ve seen so far were propelled by pure, unfounded belief in themselves (like in a movie where someone doesn’t want to jump, another says “yes, you can do it!”, and the person jumps, but rather than jumping over and surviving to send a positive message, falls to an instant death, while the fire the person was running from just goes out). The successful startups, on the other hand, had a very well-founded belief in themselves (good track record, attainable goals), or started from a hobby project that became successful.
Judging from the success rate that VCs have at predicting successful startups, I conclude that the “pure unfounded belief on the one hand, well-founded belief on the other” metric is not easily applied to real organizations by real observers.
Mm. This is why an incompetent nonprofit can linger for years: no one else is doing what they do, so they feel they still have to exist, even though they’re not achieving much, and would have died already as a for-profit business. I am now suspecting that the hard part for a nonprofit is something along the lines of working out what the hell you should be doing to achieve your goal. (I would be amazed if there were not extensive written-up research in this area, though I don’t know what it is.)
That book looks like the basic solution to the pattern I outline here, and from your description, most people who have any public good they want to achieve should read it around the time they think of getting a second person involved.
Does Luke disagree with this clarified point? I do not find a clear indicator in this conversation.
You’re allowed to say these things on the public Internet?
I just fell in love with SI.
Well, at our most recent board meeting I wasn’t fired, reprimanded, or even questioned for making these comments, so I guess I am. :)
Not even funny looks? ;)
It doesn’t matter that I completely understand why the “count with one hand” phrasing was included; I still found it hilarious in a network sitcom sort of way.
Consider the implications in light of Holden Karnofsky’s critique of SI’s pretensions to high rationality.
Rationality is winning.
SI, at the same time as it was claiming extraordinary rationality, was behaving in ways that were blatantly irrational.
Although this is supposedly due to “the usual causes,” rationality (winning) subsumes overcoming akrasia.
Holden Karnofsky is correct that SI made claims for its own extraordinary rationality at a time when its leaders weren’t rational.
Further: why should anyone give SI credibility today—when it stands convicted of self-serving misrepresentation in the recent past?
In the context of thomblake’s comment, I suppose nonprofits started is the proper reference class.
I don’t see what’s the point to comparing to average nonprofits. Average for-profits don’t realize any profit, and average non-profits just waste money.
I would say SIAI is best paralleled to average started ‘research’ organization that is developing some free energy something, run by non-scientists, with some hired scientists as chaff.
Sadly, I agree. Unless you look at it very closely, SIAI pattern-matches to “crackpots trying to raise money to fund their crackpottiness” fairly well. (What saves them is that their ideas are a lot better than the average crackpot.)
Yes, this.
On an arbitrary scale I just made up, below 100 degrees of rationality is “irrational”, and 0 degrees of rationality is “ordinary”. 50 is extraordinarily rational and yet irrational.
Being at 50 while thinking you’re at 100 makes you an extraordinary loser (overconfidence leads to big failures).
In any case, this is just wordplay. Holden has seen many organizations that are/were more rational; that’s probably what he means by a lack of extraordinary rationality.
You’ve misread the post—Luke is saying that he doesn’t think the “usual defeaters” are the most likely explanation.
Correct.
Just to let you know, you’ve just made it on my list of the very few LW regulars I no longer bother replying to, due to the proven futility of any communications. In your case it is because you have a very evident ax to grind, which is incompatible with rational thought.
This comment seems strange. Is having an ax to grind opposed to rationality? Then why does Eliezer Yudkowsky, for example, not hesitate to advocate for causes such as friendly AI? Doesn’t he have an ax to grind? More of one really, since this ax chops trees of gold.
It would seem intellectual honesty would require that you say you reject discussions with people with an ax to grind, unless you grind a similar ax.
From http://www.usingenglish.com: “If you have an axe to grind with someone or about something, you have a grievance, a resentment and you want to get revenge or sort it out.” One can hardly call the unacknowledged emotions of resentment and the need for revenge/retribution compatible with rationality. srdiamond piled up a bunch of (partially correct but irrelevant in the context of my comment) negative statements about SI, making these emotions quite clear.
That’s a restrictive definition of “ax to grind,” by the way—it’s normally used to mean any special interest in the subject: “an ulterior often selfish underlying purpose” (Merriam-Webster’s Collegiate Dictionary).
But I might as well accept your meaning for discussion purposes. If you detect unacknowledged resentment in srdiamond, don’t you detect unacknowledged ambition in Eliezer Yudkowsky?
There’s actually good reason for the broader meaning of “ax to grind.” Any special stake is a bias. I don’t think you can say that someone who you think acts out of resentment, like srdiamond, is more intractably biased than someone who acts out of other forms of narrow self-interest, which almost invariably applies when someone defends something he gets money from.
I don’t think it’s a rational method to treat people differently, as inherently less rational, when they seem resentful. Resentment is only one of many difficult biases; financial interest is probably more biasing. If you think the arguments are crummy, that’s something else. But the motive—resentment or finances—should have little bearing on how a message is treated in serious discussion.
Eliezer certainly has a lot of ambition, but I am surprised to see an accusation that this ambition is unacknowledged.
The impression I get from scanning their comment history is that metaphysicist means to suggest here that EY has ambitions he hasn’t acknowledged (e.g., the ambition to make money without conventional credentials), not that he fails to acknowledge any of the ambitions he has.
Thank you for this analysis, it made me think more about my motivations and their validity. I believe that my decision to permanently disengage from discussions with some people is based on the futility of such discussions in the past, not on the specific reasons they are futile. At some point I simply decide to cut my losses.
Indeed, present company not excluded. The question is whether it permanently prevents the ax-grinder from listening. EY, too, has his share of unacknowledged irrationalities, but both his status and his ability to listen and to provide insights makes engaging him in a discussion a rewarding, if sometimes frustrating experience.
I do not know why srdiamond’s need to bash SI is so entrenched, or whether it can be remedied to a degree where he is once again worth talking to, so at this point it is instrumentally rational for me to avoid replying to him.
Well, all we really know is that he chose to. It may be that everyone he works with then privately berated him for it.
That said, I share your sentiment.
Actually, if SI generally endorses this sort of public “airing of dirty laundry,” I encourage others involved in the organization to say so out loud.
The largest concern from reading this isn’t really what it says in a management context, but what it says about SI in general. Here is an area where there is real expertise and there are basic books discussing well-understood methods, and they didn’t use any of it. Given that, how likely should I think it is that, when SI and mainstream AI people disagree, part of the problem is the SI people not paying attention to the basics?
(nods) The nice thing about general-purpose techniques for winning at life (as opposed to domain-specific ones) is that there’s lots of evidence available as to how effective they are.
Precisely. One example of an existing baseline: the existing software that searches for solutions to engineering problems, such as “self-improvement” via the design of better chips. It works within a narrowly defined field in order to cull the search space. Should we expect state-of-the-art software of this kind to be beaten by someone’s contemporary paperclip maximizer? By how much?
This is incredibly relevant to AI risk, but the analysis can’t be faked without real technical expertise.
I doubt there’s all that much of a correlation between these things to be honest.
This makes me wonder… What “for dummies” books should I be using as checklists right now? Time to set a 5-minute timer and think about it.
What did you come up with?
I haven’t actually found the right books yet, but these are the areas where I decided I should find some “for beginners” text. The important insight is that I’m allowed to use these books as skill/practice/task checklists or catalogues, rather than ever reading them straight through.
General interest:

- Career
- Networking
- Time management
- Fitness

For my own particular professional situation, skills, and interests:

- Risk management
- Finance
- Computer programming
- SAS
- Finance careers
- Career change
- Web programming
- Research/science careers
- Math careers
- Appraising
- Real estate
- UNIX
For fitness, I found Liam Rosen’s FAQ (the ‘sticky’ from 4chan’s /fit/ board) to be remarkably helpful and information-dense. (Mainly: ‘toning’ doesn’t mean anything, and you should probably be lifting heavier weights in a linear progression. But it’s short enough to be worth actually reading through.)
The For Dummies series is generally very good indeed. Yes.
I remember that, when Anna was managing the fellows program, she was reading books of the “for dummies” genre and trying to apply them… it’s just that, as it happened, the conceptual labels she accidentally happened to give to the skill deficits she was aware of were “what it takes to manage well” (i.e. “basic management”) and “what it takes to be productive”, rather than “what it takes to (help) operate a nonprofit according to best practices”. So those were the subjects of the books she got. (And read, and practiced.) And then, given everything else the program and the organization were trying to do, there wasn’t really any cognitive space left over to effectively notice the possibility that those wouldn’t be the skills that other people afterwards would complain that nobody had acquired and obviously should have known to acquire. The rest of her budgeted self-improvement effort mostly went toward overcoming self-defeating emotional/social blind spots and motivated cognition. (And I remember Jasen’s skill-learning focus was similar, except with more of the emphasis on emotional self-awareness and less on management.)
I remember Anna went out of her way to get advice from people who she already knew, who she knew to be better than her at various aspects of personal or professional functioning. And she had long conversations with supporters who she came into contact with for some other reasons; for those who had executive experience, I expect she would have discussed her understanding of SIAI’s current strategies with them and listened to their suggestions. But I don’t know how much she went out of her way to find people she didn’t already have reasonably reliable positive contact with, to get advice from them.
I don’t know much about the reasoning of most people not connected with the fellows program about the skills or knowledge they needed. I think Vassar was mostly relying on skills tested during earlier business experience, and otherwise was mostly preoccupied with the general crisis of figuring out how to quickly-enough get around the various hugely-saliently-discrepant-seeming-to-him psychological barriers that were causing everyone inside and outside the organization to continue unthinkingly shooting themselves in the feet with respect to this outside-evolutionary-context-problem of existential risk mitigation. For the “everyone outside’s psychological barriers” side of that, he was at least successful enough to keep SIAI’s public image on track to trigger people like David Chalmers and Marcus Hutter into meaningful contributions to and participation in a nascent Singularity-studies academic discourse. I don’t have a good idea what else was on his mind as something he needed to put effort into figuring out how to do, in what proportions occupying what kinds of subjective effort budgets, except that in total it was enough to put him on the threshold of burnout. Non-profit best practices apparently wasn’t one of those things though.
But the proper approach to retrospective judgement is generally a confusing question.
The general pattern, at least post-2008, may have been one where the people who could have been aware of problems felt too metacognitively exhausted and distracted by other problems to think about learning what to do about them, and hoped that someone else with more comparative advantage would catch them, or that the consequences wouldn’t be bigger than those of the other fires they were trying to put out.
There were also several attempts at building parts of a strategy document or strategic plan, which together took probably 400-1800 hours. In each case, the people involved ended up determining, from how long it was taking, that, despite reasonable-seeming initial expectations, it wasn’t on track to possibly become a finished presentable product soon enough to justify the effort. The practical effect of these efforts was instead mostly just a hard-to-communicate cultural shared understanding of the strategic situation and options—how different immediate projects, forms of investment, or conditions in the world might feed into each other on different timescales.
There was an accountant (who herself already cost like $33k/yr as the CFO, despite being split three ways with two other nonprofits) who would have been the one informally expected to have been monitoring for that sort of thing, and to have told someone about it if she saw something, out of the like three paid administrative slots at the time… well, yeah, that didn’t happen.
I agree with a paraphrase of John Maxwell’s characterization: “I’d rather hear Eliezer say ‘thanks for funding us until we stumbled across some employees who are good at defeating their akrasia and [had one of the names of the things they were aware they were supposed to] care about [happen to be “]organizational best practices[“]’, because this seems like a better depiction of what actually happened.” Note that this was most of the purpose of the Fellows program in the first place—to create an environment where people could be introduced to the necessary arguments/ideas/culture and to help sort/develop those people into useful roles, including replacing existing management, since everyone knew there were people who would be better at their job than they were and wished such a person could be convinced to do it instead.
FWIW, I never knew the purpose of the VF program was to replace existing SI management. And I somewhat doubt that you knew this at the time, either. I think you’re just imagining this retroactively given that that’s what ended up happening. For instance, the internal point system used to score people in the VFs program had no points for correctly identifying organizational improvements and implementing them. It had no points for doing administrative work (besides cleaning up the physical house or giving others car rides). And it had no points for rising to management roles. It was all about getting karma on LW or writing conference papers. When I first offered to help with the organization directly, I was told I was “too competent” and that I should go do something more useful with my talent, like start another business… not “waste my time working directly at SI.”
Seems like a fair paraphrase.
This inspired me to make a blog post: You need to read Nonprofit Kit for Dummies.
… which Eliezer has read and responded to, noting he did indeed read just that book in 2000 when he was founding SIAI. This suggests having someone of Luke’s remarkable drive was in fact the missing piece of the puzzle.
Fascinating! I want to ask “well, why didn’t it take then?”, but if I were in Eliezer’s shoes I’d be finding this discussion almost unendurably painful right now, and it feels like what matters has already been established. And of course he’s never been the person in charge of that sort of thing, so maybe he’s not who we should be grilling anyway.
Obviously we need How to be Lukeprog for Dummies. Luke appears to have written many fragments for this, of course.
Beating oneself up with hindsight bias is IME quite normal in this sort of circumstance, but not actually productive. Grilling the people who failed makes it too easy to blame them personally, when it’s a pattern I’ve seen lots and lots, suggesting the problem is not a personal failing.
Agreed entirely—it’s definitely not a mark of a personal failing. What I’m curious about is how we can all learn to do better at the crucial rationalist skill of making use of the standard advice about prosaic tasks—which is manifestly a non-trivial skill.
The Bloody Obvious For Dummies. If only common sense were!
From the inside (of a subcompetent charity—and I must note, subcompetent charities know they’re subcompetent), it feels like there’s all this stuff you’re supposed to magically know about, and lots of “shut up and do the impossible” moments. And you do the small very hard things, in a sheer tour de force of remarkable effort. But it leads to burnout. Until the organisation makes it to competence and the correct paths are retrospectively obvious.
That actually reads to me like descriptions I’ve seen of the startup process.
The problem is that there are two kinds of efficiency/competence here: efficiency in doing the accounting correctly, which is relatively easy, and efficiency in actually doing relevant novel technical work that matters. For the former you can get advice from books; for the latter you won’t get any advice, it’s a much harder problem, and the typical level of performance is exactly zero (even among those who get the first part right). The difference in difficulty is larger than that between assembling a robot kit by following instructions and designing a groundbreaking new robot and making a billion dollars off it.
The best advice for the vast majority of startups is: dissolve the startup and get normal jobs, starting tomorrow. The best advice for all of them is to take a very good look at themselves, knowing that the most likely conclusion should be “dissolve and get normal jobs.” The failed startups I’ve seen so far were propelled by pure, unfounded belief in themselves (like in a movie where someone doesn’t want to jump, another person says “yes, you can do it!”, and the person jumps—but rather than sending a positive message by clearing the gap and surviving, falls to an instant death, while the fire the person was running from just goes out). The successful startups, on the other hand, had very well-founded belief in themselves (a good track record, attainable goals), or started from a hobby project that became successful.
Judging from the success rate that VCs have at predicting successful startups, I conclude that the “pure unfounded belief on the one hand, well-founded belief on the other” metric is not easily applied to real organizations by real observers.
Mm. This is why an incompetent nonprofit can linger for years: no-one is doing what they do, so they feel they still have to exist, even though they’re not achieving much, and would have died already as a for-profit business. I am now suspecting that the hard part for a nonprofit is something along the lines of working out what the hell you should be doing to achieve your goal. (I would be amazed if there were not extensive written-up research in this area, though I don’t know what it is.)
That book looks like the basic solution to the pattern I outline here, and from your description, most people who have any public good they want to achieve should read it around the time they think of getting a second person involved.