I do think the bigger point is that your argument describes a tiny effect, even if it's correct, so it gets dwarfed by any number of other random things (like better-educated people, or a lower cumulative probability of war), and even more so by the general arguments suggesting that the effects of growth would be differentially positive.
But if you accept all of your argument except the last step, Katja's point seems right, and so I think you've gotten the sign of this particular effect wrong. More economic growth means more work per person and the same number of people working in parallel; do you disagree with that? (If so, is it because you think more economic activity means a higher population, or because it means diverting people from other tasks to AI? I agree there will be a little of the latter, but it's a pretty small effect, and you haven't even invoked the relevant fact about the world (that marginal AI spending is higher than average AI spending) in your argument.)
So if you care about parallelization in time, the effect is basically neutral (the same number of people are working on AI at any given time). If you care about parallelization across people, the effect is significant and positive, because each person does a larger fraction of the total project of building AI. It's not obvious to me that insight-constrained projects (as opposed to "normal" AI projects) care particularly about either. But if they care somewhat about both, then this would be a positive effect. They would have to care several times more about parallelization in time than about parallelization across people in order for you to have gotten the sign right.
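To make the arithmetic concrete, here's a minimal toy model of that claim (the function, the specific numbers, and the assumption that growth acts as a simple per-person productivity multiplier are all illustrative assumptions, not anything established above):

```python
# Toy model: faster growth speeds each person up but adds no parallel
# workers. Numbers and the linear-productivity assumption are made up
# purely for illustration.

def project_stats(total_work, workers, growth_factor, career_years):
    """total_work: person-years of baseline-productivity work needed;
    workers: people working on AI in parallel at any instant;
    growth_factor: per-person productivity multiplier from growth;
    career_years: length of one researcher's career."""
    calendar_years = total_work / (workers * growth_factor)
    # Distinct contributors over the whole project: cohorts of
    # `workers` people, replaced every `career_years`.
    distinct_people = workers * max(calendar_years / career_years, 1.0)
    return calendar_years, distinct_people, 1.0 / distinct_people

for g in (1.0, 2.0):  # baseline growth vs. doubled growth
    t, n, f = project_stats(total_work=10_000, workers=100,
                            growth_factor=g, career_years=25)
    print(f"g={g}: {t:.0f} years, {n:.0f} people, {f:.4f} each")
```

Doubling the growth factor leaves 100 people working in parallel at every instant (neutral in time), while the project finishes in 50 years instead of 100 with 200 distinct contributors instead of 400, so each person does twice the fraction of the total (positive across people).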
My model of Paul Christiano does not agree with this statement.
I was fortunate to discuss this with Paul and Katja yesterday, and Paul seemed to feel that this was a strong argument.
...odd. I’m beginning to wonder if we’re wildly at skew angles here.