Sometimes people express concern that AIs may replace them in the workplace. This is (mostly) silly. Not that it won’t happen, but you’ve gotta break some eggs to make an industrial revolution. This is just ‘how economies work’ (whether or not they can / should work this way is a different question altogether).
The intrinsic fear of joblessness-resulting-from-automation is akin to worrying that curing infectious diseases would put gravediggers out of business.
There is a special case here, though: double-digit unemployment (and youth unemployment, in particular) is a major destabilizing force in politics. You definitely don’t want an automation wave so rapid that the jobless and nihilistic youth mount a civil war, sharply curtailing your ability to govern the dangerous technologies that took everyone’s jobs in the first place.
As AI systems become more expensive and more powerful, and as pressure to deploy them profitably increases, I’m fairly concerned that we’ll see a massive hollowing out of many white collar professions, resulting in substantial civil unrest, violence, and chaos. I’m not confident that we’ll get (e.g.) a UBI (or that it would meaningfully change the situation even if we did), and I’m not confident that there’s enough inertia in existing economic structures to soften the blow.
The IMF estimates that current tech (~GPT-4 at launch) can automate ~30% of human labor performed in the US. That’s a big, scary number. About half of that, they imagine, covers the kinds of work you always want more of anyway, such that complementarity just drives production in that 15% of cases. The other 15%, though, probably just stops existing as jobs altogether (for various reasons, I think a 9:1 replacement rate is more likely than full automation with current tech). A back-of-envelope version of that arithmetic is sketched below.
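To make the arithmetic concrete, here’s a minimal sketch. The 30% figure and the 50/50 complementarity split are the IMF’s; the 9:1 replacement ratio is my own guess, and I’m assuming here that “9:1” means nine of every ten affected roles get cut while one human stays on to run the tooling:

```python
# Back-of-envelope sketch of the IMF-style split; all figures are rough
# assumptions for illustration, not official estimates.

us_labor_share_automatable = 0.30  # IMF: ~30% of US labor exposed (GPT-4-era tech)
complementary_fraction = 0.50      # half of that is complemented, not replaced
replacement_ratio = 0.90           # my guess: 9 of 10 affected roles cut, 1 human kept

# Share of labor at risk of outright replacement (the scary 15%).
at_risk = us_labor_share_automatable * (1 - complementary_fraction)

# Implied displacement under the 9:1 assumption, rather than full automation.
jobs_lost = at_risk * replacement_ratio

print(f"Share of labor at risk of replacement: {at_risk:.1%}")   # 15.0%
print(f"Implied unemployment bump at 9:1:      {jobs_lost:.1%}") # 13.5%
```

So even under my softer 9:1 assumption, you land within spitting distance of the 15% figure used throughout the rest of this post.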
This mostly isn’t happening yet because you need an ML engineer to commit Serious Time to automating away That Job In Particular. ML engineers are expensive, and usually not specced for the kind of client-facing work this would require (i.e., breaking down the tasks that make up a job, knowing which parts can be automated, and via what mechanisms, be that purpose-built models, fine-tuning, a prompt library for a human operator, or some specialized scaffolding...). There’s just a lot of friction and lay-speak involved, and it’s probably not economically worth it for some subset of the necessary parties (ML engineers can make more elsewhere than small business owners can afford to pay them to automate things away, for instance).
So we’ve got a bottleneck, and on the other side of it, this speculative 15% leap in unemployment. That potential leap, though, climbs as capabilities increase (this is tautologically true; “drop-in replacement for a remote worker” is one major criterion used in discussions about AI progress).
I don’t expect 15% unemployment to destabilize the government (the Great Depression peak was 25%, which is a decent lower bound on ‘potentially dangerous’ levels of unemployment in the US). But I do expect that 15% powder keg to grow in size, and potentially cross into dangerous territory before it’s lit.
Previously, I’d actually arrived at that 30% number myself (almost exactly one year ago), but I had initially expected that:
1. Labs would devote substantial resources to this automation, and it would happen more quickly than it has so far.
2. All of these jobs were simply on the chopping block (frankly, I’m not sure how much I buy the complementarity argument, but I am An Internet Crank, and they are the International Monetary Fund, so I’ll defer to them).
These two beliefs made the situation look much more dire than I now believe it to be, but it’s still, I claim, worth entertaining as A Way This Whole Thing Could Go, especially if we’re hitting a capabilities plateau, and especially if we’re doubling down on government intervention as our key lever for averting x-risk.
[I’m not advocating for a centrally planned automation scheme, to be clear; I think these things have basically never worked, but I’d like to hear counterexamples. Maybe just like… a tax on automation to help stanch the flow of resources into the labs and their surrogates, a restructuring of unemployment benefits and retraining programs, and, before any of that, a more robust effort to model the economic consequences of current and future systems than the IMF report that just duplicates the findings of some idiot (me) frantically reviewing BLS statistics in the winter of 2023.]