Since XiXiDu also asked this question on my blog, I answered over there.
If I tell you that all you have to do is read the LessWrong Sequences and the publications written by the SIAI to agree that working on AI is much more important than climate change, are you going to take the time and do it?
I have read most of those things, and indeed I’ve been interested in AI and the possibility of a singularity at least since college (say, 1980). That’s why I interviewed Yudkowsky.
That answers my questions. There are only two options: either there is no strong case for risks from AI, or a world-class mathematician like you didn't manage to understand the arguments after trying for 30 years. For me, that means I can only hope to be much smarter than you (so that I can understand the evidence myself) or conclude that Yudkowsky et al. are less intelligent than you are. No offense, but what other option is there?
Understanding of the singularity is not a monotonically increasing function of intelligence.
I should also state how I would answer my own question. My answer would be no. The SIAI deserves funding, but since it currently receives $500,000 per year, I would not recommend that someone donate another $100,000 right now. The reason is that I think there are valid arguments that justify the existence of such an organisation, but there is no reason to expect that it currently needs more money. The SIAI publishes no progress reports and does not disclose how it uses the money it receives. There are various other issues that lead me to conclude that the SIAI does not currently deserve more donations. That is not to say that the problem of risks from AI does not deserve more funding, only that it should be funded differently. My current uncertainty about how urgent and substantive the risks are also contributes to my conclusion that the SIAI is, at this time, well-funded.
I'm asking people like you to assess how likely it is that I am wrong in my judgement, and whether I should make it a priority to seek more information right now or concentrate on other projects.
Just a minor correction: this cannot be a true statement about an American 501(c)(3) charity, because it would be illegal for them not to disclose what they spend money on in their Form 990. Hence, it's easy to examine SIAI/MIRI, the Girl Scouts, the Edge Foundation, the Lifeboat Foundation, JSTOR, ALCOR… Really, all the information is there for anyone who wants it, and you can download it for free; one just has to not be lazy and not assume that it doesn't exist.
They do do various things like that from time to time; see, e.g., http://singinst.org/achievements
Up to 2008, almost half of it went into paying their own salaries, IIRC.
The SIAI accounts are on Guidestar. You have to register, though.
In 2009, they received $432,139 in "gifts" and made $194,686 by putting on a conference; they paid Yudkowsky $95,550 and Vassar $52,083. Yudkowsky probably also got a fair bit of the $83,934 spent on project 4c. Another $400,000 was spent on the things described at the end of the document. All figures are in USD.
“Probably?” According to what priors? Do not make stuff up. As of 2013, MIRI has never paid anyone more than $99K in one year, and IIRC the $95K shown there was due to an error by the payroll service we were using, which accidentally shifted one month of my salary backward by one year (paid on Dec 31 instead of Jan 1).
I confirm the payroll error part; I remember speaking to Amy about it a couple of times, though it happened shortly before my time. I also suspect MIRI has never paid anyone more than $99k in one year, but I haven't looked it up.