Thanks! This is my new favorite answer. I consider it to be a variant on Abram’s answer.
--I think the "large number of small improvements vs. small number of large improvements" thing is a red herring, a linguistic trick as you say. There's something useful about the distinction for sure, but I don't think we have any major disagreements here.
-- Re: "21st century acceleration is about computers taking over cognitive work from humans" will be the analog of "The industrial revolution is about engines taking over mechanical work from humans / beasts of burden." Yes, this sounds like a good hypothesis to me. My objection to it right now is that software has already been a thing for seventy years or so; we've been automating lots of human cognitive work every decade for most of that time. Yet growth rates have gone down rather than up. So it seems to me that if software is going to make growth rates go up a lot, we have to say something extra to explain why it hasn't done so yet. To be clear, I don't think there is nothing to say on this; I would actually bet that someone has a decent explanation to give, and I'm interested to hear it. (One thought might be: steam engines started picking up steam with Watt's engine in 1775, but growth in the UK stayed on-trend until about a century later! So maybe these things just take time.)
--And this objection is more general than that; it's the stick I'm using to beat up everything right now. Yeah, I can imagine lots of little improvements causing an increase in the growth rate; it's totally happened before. But for the last fifty years at least, it hasn't been happening: there have been lots of little improvements in every sector, and even some big improvements, and loads of jobs have been automated away, but overall growth rates haven't gone up, and have even declined. So what reason do we have to expect that in the future, as even more improvements are found and even more jobs are automated, growth rates will go up? (Again, I would bet that there are good reasons; I just don't know what they are right now.)
--Re: "From that perspective, asking 'What technology short of AGI would take over cognitive work from humans, and how?' is analogous to asking 'What technology short of a universal actuator would take over mechanical work from humans, and how?'" I love this comparison. I totally agree that in principle we could automate everything, or almost everything, with narrow AI (incl. software) rather than AGI; after all, instead of building humanoid robots we built more specialized machines. However, I think AGI is likely to come before we've automated everything with narrow AI; indeed, if we set out to automate everything with narrow AI, AGI would probably arrive before we finished the job. (Building AGI doesn't require automating the cognitive work of all jobs in the economy, only a subset; we could automate those and then get AGI before we've automated the bulk of the jobs in the economy.) By contrast, the parallel argument about universal actuators isn't as plausible. Having lots of really specialized actuators doesn't help you much in getting a humanoid robot, because there the bottleneck is finding the right design rather than getting stuff from one position/location to another. (Whereas for AGI, the bottleneck is finding the right design, and that's exactly the sort of cognitive task we are automating with narrow AI.)
--Further thoughts on the analogy: I am fairly convinced that there are many important tasks which can be done most competitively by agent-like systems that are fairly general intelligences. (Perhaps the analogous thing for actuators is: there are some tasks that are better done by "general vehicles" which aren't tied to a particular location, can travel over a large range of terrain types, and can transport a wide range of cargoes and also perform other tasks like digging and pulling things, i.e. pickup trucks. Extending the analogy further, perhaps pre-trained unsupervised world-models are like the engines that get mass-produced and put in cars, trucks, airplanes, tanks, and sometimes even fixed locations. So maybe engines are like universal actuators after all, in a sense.) And the classic problems of AI risk arise from these sorts of systems. So the question is how much relative progress will be made at automating these tasks vs. automating all the other tasks. If it's very little, such that these are the last tasks to be automated to a significant extent (if CEOs and generals and researchers etc. are the last jobs to go!), then yeah, the economy might be growing fast by the time classic AI risk concerns start to materialize. If however these jobs are automated at a similar or greater rate, then AI risk concerns will be materializing at the same time as the tech to accelerate the economy is invented, which means slightly before the economy actually accelerates.
I think you are moderately underestimating the extent of historical acceleration and so overestimating how much qualitative change would be needed:
Fair enough, thanks for the input and the data. In particular:
So it seems to me like things really did change a lot as technology improved, growing from 0.4% in 1800-1850, to 1.0% in 1850-1900, to 0.8% in 1900-1950, to 2.4% in 1950-2000. What we're talking about is a further change similar in scope to the change from 1800 to 1850 or from 1900 to 1950.
This neatly gets down to business. The issue becomes: we've seen doublings of the growth rate (and halvings) a time or two in the past two centuries, so it's reasonable to expect more in the next (see the quick arithmetic sketch below). And insofar as the explanation for these past changes was "lots of things got better across all sectors of the economy," we should take seriously the corresponding prediction for the future. But insofar as the explanation was instead "yes, lots of things got better, but to a first approximation the main drivers of progress were engines + electricity + …," then we should expect any future changes to come along with a similar list of main drivers. And then the question is: what would those drivers be? The answer would probably be: software / narrow AI. And then the question would be: OK, but we've had software / narrow AI for a while; why hasn't it had an effect yet? And the answer would be… well, I don't know what it is yet, but I'm reasonably confident there is one.
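Just to make the quoted figures concrete, here's a minimal back-of-the-envelope sketch of what those growth rates imply for doubling times. The per-period rates are the ones quoted above; the 5% endpoint is purely my own illustrative assumption for what a "further change similar in scope" (roughly a doubling of the current rate) might look like.

```python
import math

# Annual GDP-per-capita growth rates quoted above, by 50-year period.
periods = {
    "1800-1850": 0.004,
    "1850-1900": 0.010,
    "1900-1950": 0.008,
    "1950-2000": 0.024,
}

def doubling_time(r):
    """Years for output to double at a constant annual growth rate r."""
    return math.log(2) / math.log(1 + r)

for period, r in periods.items():
    print(f"{period}: {r:.1%}/yr -> doubles every ~{doubling_time(r):.0f} years")

# Hypothetical further jump of similar scope: 2.4%/yr -> ~5%/yr.
print(f"2.4% -> 5.0%: doubling time falls from "
      f"~{doubling_time(0.024):.0f} to ~{doubling_time(0.05):.0f} years")
```

On these numbers, the kind of shift under discussion is the frontier doubling time falling from roughly 30 years to roughly 15, which is of the same order as the jump from 1900-1950 (doubling every ~87 years) to 1950-2000 (every ~29 years).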
--I agree GDP per capita in frontier economies is a more relevant metric than GWP. Why don't we use that instead?