A key step in your argument is the importance of the parallel/serial distinction. However, we already have some reasonably effective institutions for making naturally serial work parallelizable (e.g. peer review), and more are arising; these have allowed new areas of mathematics to be explored pretty quickly. They act as a valve, which should mean that extra work on FAI is only slightly less effective than you'd initially think.
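To put the parallel/serial point in a standard form (this framing is mine, not from the original exchange): if a fraction $s$ of the research is irreducibly serial, then by Amdahl's law $N$ parallel workers give a speedup of

$$\text{speedup}(N) = \frac{1}{s + (1-s)/N},$$

which is capped at $1/s$ no matter how large $N$ gets. Institutions like peer review effectively shrink $s$, raising that cap.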
You could still think this was the dominant point if economic growth increased the speed of AI work and of AI safety work to the same degree, but I think that is very unclear. Even granting points (1)-(3) of Paul's disjunction, the most important question seems to be how the elasticity of AI work with respect to economic growth compares with the elasticity of FAI work with respect to economic growth.
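To see why the comparison of elasticities matters, here is a toy model (my own sketch with made-up parameters, not anything from Paul's post): suppose the rates of AI and FAI work each scale as GDP raised to their respective elasticities, and AI arrives once cumulative AI work crosses a fixed threshold. Then faster growth increases the total FAI work done before AI arrives exactly when the FAI elasticity exceeds the AI elasticity.

```python
import math

def fai_work_at_ai_arrival(growth, e_ai, e_fai, threshold=100.0, dt=0.01):
    """Toy model: GDP grows exponentially at rate `growth`; AI and FAI work
    accrue at rates GDP**e_ai and GDP**e_fai. Integrate until cumulative AI
    work hits `threshold` and return cumulative FAI work at that moment."""
    t = ai = fai = 0.0
    while ai < threshold:
        gdp = math.exp(growth * t)
        ai += gdp ** e_ai * dt
        fai += gdp ** e_fai * dt
        t += dt
    return fai

# With FAI elasticity above AI elasticity, doubling the growth rate leaves
# *more* safety work done by the time AI arrives; with it below, less.
for g in (0.02, 0.04):
    print(g, round(fai_work_at_ai_arrival(g, e_ai=1.0, e_fai=1.5), 1))
```

In this toy setup the effect flips if the FAI elasticity is below the AI elasticity, and growth is neutral when the two are equal, which is why the comparative elasticities look like the crux.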
I currently incline to think that, in general, a more prosperous society will fund or draw proportionally more people into work which is a luxury in the short term, and AI safety work fits into this category (whereas AI work is more likely to have short-term economic benefits, and so to attract funding). However, this question could do with more investigation! In particular, I think it's plausible that the current state of EA movement growth means we're not in a typical position. I haven't yet thought carefully through all of the ramifications of this.
Nick Beckstead made comments on some related questions in this talk: http://intelligence.org/wp-content/uploads/2013/07/Beckstead-Evaluating-Options-Using-Far-Future-Standards.pdf
By the way, I can guess at one source of the disagreement here. The language of your post (e.g. “I wish I had more time, not less, in which to work on FAI”) suggests you imagine that the people currently working on FAI will account for a reasonable proportion of the total AI safety work that gets done. I think it more likely that many more people will be working on it before we get AI, in which case the growth rate dominates.