My best guess is that the next 10 (or maybe 20) years in AI will look a bit like the late 1990s and early 2000s, when a lot of internet companies were starting up. If you look at the top companies in the world by market cap, most of them are now internet companies. Compare that to the list of top companies in 1990: General Motors, Ford, General Electric. In other words, internet companies totally rocked the boat over the last few decades.
From a normal business standpoint, the rise of the internet was a massive economic shift, and continues to be the dominant engine driving growth in the US financial markets. Since 2004, the Vanguard Information Technology ETF went up about 734%, compared to a rise of only 294% in the S&P 500 during the same time period.
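(As a rough sanity check on what those cumulative figures mean on an annual basis, you can back out compound annual growth rates. The sketch below does that; the ~17-year window is my own assumption rather than something stated in the numbers above.)

```python
def cagr(cumulative_return_pct: float, years: float) -> float:
    """Back out the compound annual growth rate from a cumulative percentage return."""
    return (1 + cumulative_return_pct / 100) ** (1 / years) - 1

# The ~17-year window (2004 to roughly the time of writing) is my assumption,
# not a figure from the discussion above.
years = 17
print(f"Tech ETF: {cagr(734, years):.1%}/yr")  # roughly 13%/yr
print(f"S&P 500:  {cagr(294, years):.1%}/yr")  # roughly 8%/yr
```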
And yet, overall economic growth is still sluggish. Despite the fact that we went from a world in which almost no one used computers to a world in which computers are an essential part of almost everyone’s daily lives, our material world is surprisingly still pretty similar. The last two decades have been the slowest-growing decades in over a century.
If you focused only on the fact that our smartphones are way better than anything that came before (and super neat), you’d miss that smartphones aren’t making the world go crazy. Likewise, I don’t doubt that we will get a ton of new cool AI products that people will use. I also think it’s likely that AI is going to rock the boat in the financial markets, just like internet companies did 20 years ago. I even think it’s likely that we’ll see the rise of new AI products that become completely ubiquitous, transforming our lives.
For some time, people will see these AI products as a big deal. Lots of people will speculate about how the next logical step will be full automation of all labor. But I still think that by the end of the decade, and even the next, these predictions won’t be vindicated. People will still show up to work to get paid, the government will still operate just as it did before, and we’ll all still be biological humans.
Why? Because automating labor is hard. To take just one illustration, we still can’t fully automate speech transcription. Look into current transcription services and you’ll see why. Writing an AI that can transcribe some 75% of your words correctly turned out to be relatively easy. But it’s been much harder to do the more nuanced stuff, like recognizing which speakers are saying what, or correctly transcribing the uhhs and ahhs and made-up names like, say, “Glueberry”.
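(For the record, claims like “75% of words correct” are usually cashed out as word error rate. Here’s a minimal sketch of how that metric is computed; the example sentences are made up and real evaluations use large test sets, but it shows why a disfluency or an invented name counts against the system.)

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length,
    computed via word-level edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

# Made-up example: the disfluency and the invented name are what trip the system up.
ref = "uh so Glueberry launched their new product yesterday"
hyp = "so blueberry launched their new product yesterday"
print(f"WER: {word_error_rate(ref, hyp):.0%}")  # 2 errors / 8 reference words = 25%
```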
My impression is that when people look at current AI tech, are impressed, and then think to themselves, “Well, if we’re already here, then we’ll probably be able to automate all human labor within, say, 10 years,” they just aren’t thinking through all the complicated problems that actually have to be solved before labor is fully replaced. And that makes sense, since that stuff is less salient in our minds: we can see the impressive feats AI can already pull off, because the demos are right there for us to marvel at. It’s much harder to see all the stuff AI can’t yet do.
It sounds like we might not disagree a lot? We’ve exchanged a few responses to each other in the past that may have given the impression that we disagree strongly on AI timelines, but plausibly we just frame things differently.
Roughly speaking, when I say “AI timelines” I’m referring to the time during which AI fundamentally transforms the world, not necessarily when “an AGI” is built somewhere. I think this framing is more useful because it tracks more closely what EAs actually care about when they talk about AI.
I also don’t think that the moment GDP accelerates is the best moment to intervene, though my framing here is different. I’d be inclined to discard a binary model of intervention, in which all efforts after some critical threshold are wasted. Rather, intervention is a lot like the time-value of money in finance. In general, it’s better to have money now rather than later; similarly, it’s better to intervene earlier rather than later. But the value of intervention diminishes continuously as time goes on, eventually approaching zero.
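(To make the analogy concrete, here’s a toy model of my own, not anything established: treat the value of intervening t years from now like a discounted cash flow, so it decays smoothly rather than falling off a cliff at a single threshold. The 10% “discount rate” is an arbitrary illustrative number.)

```python
import math

def intervention_value(years_from_now: float,
                       present_value: float = 1.0,
                       discount_rate: float = 0.10) -> float:
    """Toy model: the value of intervening decays exponentially with delay,
    analogous to discounting a future cash flow back to the present."""
    return present_value * math.exp(-discount_rate * years_from_now)

for t in [0, 5, 10, 20, 40]:
    print(f"intervene in {t:>2} years -> relative value {intervention_value(t):.2f}")
# Value falls continuously (1.00, 0.61, 0.37, 0.14, 0.02) instead of
# dropping to zero at some single critical threshold.
```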
The best way to intervene also depends a lot on how far we are from AI-induced growth. So, for example, it might not be worth trying to align current algorithms, because that sort of work will be relatively more useful when we know which algorithms are actually being used to build advanced AI. Relatively speaking, it might be worth more right now to build institutional resilience, in the sense of creating incentive structures for actors to care about alignment. And hypothetically, if I knew about the AI alignment problem in, say, the 1920s, I might have recommended investing in the stock market until we have a better sense as to what form AI will take.
The two posts I linked above explain my view on what EAs should care about for timelines; it’s pretty similar to yours. I call it AI-PONR, but basically it just means “a chunk of time where the value of interventions drops precipitously, to a level significantly below its present value, such that when we make our plans for how to use our money, our social capital, our research time, etc. we should basically plan to have accomplished what we want to have accomplished by then.” Things that could cause AI-PONR: An AI takes over the world. Persuasion tools destroy collective epistemology. AI R&D tools make it so easy to build WMDs that we get a vulnerable world. Etc. Note that I disagree that the time when AI fundamentally transforms the world is what we care about, because I think AI-PONR will come before that point. (By “fundamentally transforms the world,” do you mean something notably different from “accelerates GDP”?) I’d be interested to hear your thoughts on this framework, since it seems you’ve been thinking along similar lines and might have more expertise than me with the background concepts from economics.
So it sounds like we do disagree on something substantive, and it’s how early in takeoff AI-PONR happens. And/or what timelines look like. I think there’s, like, a 25% chance that nanobots will be disassembling large parts of Earth by 2030, but I think that the 2030s will look exactly as you predict up until it’s too late.
Makes sense. I also agree that this is what the 2030s will look like; I don’t expect GDP growth to accelerate until it’s already too late.
The quest for testable-prior-to-AI-PONR predictions continues...