...if there were 100 good papers about it in the right journals;
Just one paper (AI safety or FAI design)...I will be very impressed. I will donate a minimum of $10 ($20 for a technical paper on FAI design) per peer-reviewed research paper per journal to the SIAI.
I doubt I’ll have to donate even once within the next 50 years. But I would be happy to be proven wrong.
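As a minimal sketch of how this pledge would tally up, here is a short Python example; the $10 and $20 rates come from the comment above, while the paper counts are entirely hypothetical.

```python
# Tally of the pledge above: $10 per peer-reviewed paper per journal,
# $20 if the paper is technical FAI-design work.
RATE_STANDARD = 10   # USD per peer-reviewed AI-safety paper per journal
RATE_TECHNICAL = 20  # USD per technical FAI-design paper per journal

def pledge_total(standard_papers: int, technical_papers: int) -> int:
    """Return the total pledged donation in USD."""
    return standard_papers * RATE_STANDARD + technical_papers * RATE_TECHNICAL

# Hypothetical counts, purely for illustration: 3 safety papers, 1 FAI paper.
print(pledge_total(standard_papers=3, technical_papers=1))  # -> 50
```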
There are some of those in the works, but note that the Future of Humanity Institute converts funds into research papers on these topics as well (Nick Bostrom is working on an academic book now which pretty comprehensively summarizes the work of folk around SIAI).
FHI accepts donations, and estimates a cost of about $200k (USD, although currency swings may have changed this number) per 2-year postdoc, including travel, share of overhead and administrative costs, conferences, journal fees, etc. As part of Oxford, they have a comparative advantage in hiring academics and lending prestige to the work. You can look at their research record on their website and assess things that way.
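For concreteness, a back-of-the-envelope sketch of what that estimate implies; the $200k-per-2-year-postdoc figure is taken from the comment above, and the rest is simple arithmetic.

```python
# Back-of-the-envelope arithmetic on the FHI cost estimate quoted above.
COST_PER_POSTDOC = 200_000  # USD per 2-year position, per the estimate above
POSTDOC_YEARS = 2

cost_per_year = COST_PER_POSTDOC / POSTDOC_YEARS  # -> 100,000 USD per year

def postdoc_years_funded(donation_usd: float) -> float:
    """How many postdoc-years a given donation buys at this rate."""
    return donation_usd / cost_per_year

print(postdoc_years_funded(50_000))  # e.g. a pooled $50k -> 0.5 postdoc-years
```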
note that the Future of Humanity Institute converts funds into research papers on these topics as well
Converts funds, or converts marginal funds?
I’ve been meaning to start the SIAI vs FHI conversation here in its own thread for some time, if people don’t think it falls afoul of Common Interest of Many Causes.
Marginal funds. FHI is funding-limited in its number of positions there. The marginal hires do not average Bostrom-level productivity (it’s hard to get academics to pursue a research agenda other than one they were already working on), but you can look at the last several hires and average across them.
I don’t know who counts as the last several hires, but while I’m sure everyone at FHI does fine work, only Bostrom and Sandberg seem to be doing research related to AI risks. Also Hanson, I suppose, to the extent that he counts as working at FHI. I don’t dispute that some marginal funds would on expectation go to research on these topics, but surely it would be a lot less than half.
Much of the dispersion is caused by the lack of unrestricted funds (and lack of future funding guarantees). Since we don’t have enough funding from private philanthropists, we have to chase academic funding pots, and that then forces us to do some work that is less relevant to the important problems we would rather be working on. It would be unfortunate if potential private funders then looked at the fact that we’ve done some less-relevant work as a reason not to give.
Thank you for weighing in! Your point sounds valid. After taking it into account, if you considered marginal dollars donated to FHI without explicit earmarking, what is your estimate for the fraction of such dollars that end up causing a dollar’s worth of research into topics that would be seen as highly relevant by someone with roughly SIAI-typical estimates for the future?
A high fraction. “A dollar’s worth of research” is not a well-defined quantity—that is, the worth of the research produced by a dollar varies a lot depending on whom the dollar is given to. I like to think FHI is good at converting dollars into research. The kind of research I’d prefer to do with unrestricted funds at the moment probably coincides pretty well with what a person with SIAI-typical estimates would prefer, though what can be researched also depends on the capabilities and interests of the research staff one can recruit. (There are various tradeoffs here—e.g. do you hire a weaker researcher who has a long record of working in this area, or take a chance on a slightly stronger researcher who risks doing irrelevant work? Do you headhunt somebody who is already actively contributing to the area, or attempt to involve a new mind who would otherwise not have contributed?)
There are also indirect effects, which might lead to the fraction being larger than one—for example, if discussions, conferences, and various kinds of influence encourage external researchers to enter the field. FHI does some of that, as does the SIAI.
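As a toy numerical illustration of how the fraction could exceed one, consider the sketch below; only the structure (a direct share plus an indirect term from drawing outside researchers in) comes from the comment above, and every number is invented.

```python
# Toy model of the point above: a marginal dollar funds some direct
# fraction of relevant research, plus an indirect term from encouraging
# external researchers to enter the field. All numbers are invented.
direct_fraction = 0.8    # hypothetical share of a dollar funding relevant work
indirect_leverage = 0.4  # hypothetical induced outside research per dollar

effective_fraction = direct_fraction + indirect_leverage
print(effective_fraction)  # -> 1.2, i.e. more than a dollar's worth per dollar
```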
Thanks. When I said “a dollar’s worth of research”, I had in mind the estimate Carl mentioned of $200k per 2-year postdoc. I guess that doesn’t affect the fraction question.
The details depend on how you count the methodology/general existential risks stuff, e.g. the “probing the improbable” paper by Ord, Sandberg, and Hillerbrand. Also note that many of Bostrom’s and Sandberg’s publications, including the catastrophic risks book, and events like the Winter Intelligence Conference benefit from help by other FHI staff. Still, some hires have definitely done essentially no existential risk-relevant work. My guess is something like 1 Sandberg or Ord equivalent per 2-3 hires (with differential attrition leading to accumulation of the good).
Also, given earmarked funding they can create positions specifically for machine intelligence issues, the results of which are easier to track (the output of that person).
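Combining the two estimates above gives a rough cost per fully relevant researcher; in the hedged sketch below, the $200k-per-2-year-postdoc figure and the one-Sandberg-or-Ord-equivalent-per-2-3-hires guess are from the comments above, and nothing else is assumed.

```python
# Rough cost per "relevant-researcher equivalent", combining the
# ~$200k per 2-year postdoc estimate with the guess of roughly one
# Sandberg/Ord-equivalent per 2-3 marginal hires.
COST_PER_POSTDOC = 200_000     # USD per 2-year position, from above
HIRES_PER_EQUIVALENT = (2, 3)  # range guessed in the comment above

low, high = (COST_PER_POSTDOC * n for n in HIRES_PER_EQUIVALENT)
print(f"${low:,} to ${high:,} per relevant-researcher equivalent (2 years)")
# -> $400,000 to $600,000 per relevant-researcher equivalent (2 years)
```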
But presumably that would only be a consideration if FHI received very large amounts of such earmarked funding?
$200k USD for one postdoc. One could save up for that, alone or with others, using a donor-advised fund, or use something like kickstarter.com.
Just one paper (AI safety or FAI design)...I will be very impressed. … I doubt I’ll have to donate even once within the next 50 years. But I would be happy to be proven wrong.

Comments like this are evidence that a focus on getting papers into journals is important, relative to the amount of effort currently going into it.
And every time someone doesn’t make a comment like this, it’s evidence that such a focus is unimportant, so what makes you think it comes out one way rather than the other on net?
LessWrong seems significantly more likely than normal to produce vocal dissent (“I wouldn’t find this useful”) rather than silence. That said, LessWrong is probably also not the majority of AI researchers, who are the actual target audience, so using ourselves as a “test market” is probably flawed on a few levels...
Does this one count?
It has had some peer review—and should be in the AGI-11 Conference Proceedings.
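The exchange above about comments as evidence is essentially conservation of expected evidence; here is a small numerical sketch, with made-up probabilities, showing that the updates from "comment observed" and "silence" must average back to the prior.

```python
# Conservation of expected evidence: if such a comment raises
# P(journal focus is important), its absence must lower it, and the
# prior equals the expected posterior. All probabilities are invented.
p_h = 0.5                    # prior: a journal focus is important
p_comment_given_h = 0.6      # P(see such a comment | important)
p_comment_given_not_h = 0.2  # P(see such a comment | not important)

p_comment = p_comment_given_h * p_h + p_comment_given_not_h * (1 - p_h)
post_if_comment = p_comment_given_h * p_h / p_comment
post_if_silence = (1 - p_comment_given_h) * p_h / (1 - p_comment)

print(post_if_comment)  # 0.75   (comment observed: update upward)
print(post_if_silence)  # ~0.333 (silence: update downward)
# Sanity check: the posteriors average back to the prior of 0.5.
print(post_if_comment * p_comment + post_if_silence * (1 - p_comment))
```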