From what you write, Acemoglu’s suggestions seem unlikely to be very successful, in particular given international competition. I paint things a bit black and white here, but I think the following logic remains salient in the messy real world too:
1. If your country unilaterally tries to halt development of the infinitely lucrative AI inventions that could automate jobs, other regions will be more than eager to accommodate the inventing companies. So, from the country’s egoistic perspective, it might as well develop the inventions domestically and at least benefit from being the inventor rather than the adopter.
2. If your country unilaterally tries to halt adoption of the technology, there are numerous capable countries keen to adopt it and to swamp you with their sales.
3. If you really were able to coordinate globally to enforce 1. or 2. (extremely unlikely in the current environment, given the huge incentives for individual countries to remain weak on enforcement), then it seems you might as well directly impose the economic first-best solution w.r.t. robots vs. labor: high global tax rates and redistribution.
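To spell out the sense in which “tax and redistribute” beats prohibition, here is a deliberately toy numerical sketch (all numbers are invented for illustration and are not from Acemoglu): automation grows the total pie but shifts income from wages to profits, and a tax on the profits with a flat rebate to workers can leave both groups better off than a ban, which forgoes the surplus entirely.

```python
# Toy two-group model (all numbers invented): compare banning automation,
# adopting it untaxed, and adopting it with a profit tax rebated to workers.

WORKERS = 100  # hypothetical workforce size

def economy(automated: bool, tax_rate: float = 0.0):
    """Return (income per worker, owner income after tax)."""
    if automated:
        # Automation: bigger total pie (1500), but income shifts to capital.
        wages, profits = 300.0, 1200.0
    else:
        # Ban / status quo: smaller pie (1000), mostly paid out as wages.
        wages, profits = 800.0, 200.0
    transfer = tax_rate * profits  # tax profits, rebate equally to workers
    return (wages + transfer) / WORKERS, profits - transfer

print("ban (no automation):", economy(automated=False))               # (8.0, 200.0)
print("automation, no tax: ", economy(automated=True))                # (3.0, 1200.0)
print("automation, 50% tax:", economy(automated=True, tax_rate=0.5))  # (9.0, 600.0)
```

With these made-up numbers, untaxed automation immiserates workers (3.0 vs. 8.0 per head), while the taxed-and-redistributed outcome (9.0 per worker, 600 to owners) leaves both groups better off than the ban. That is the textbook sense in which keeping the technology and redistributing its surplus is first best, and prohibition is not.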
Separately, I spontaneously wonder: how would one even go about differentiating the ‘bad automation’ to be discouraged from the legitimate automation without which no modern economy could run competitively anyway? For a random example, if Excel didn’t exist yet (or, for its next update..), would we have to say: sorry, no such software allowed, since any given spreadsheet risks removing thousands of hours of work...?! Or at least: please, Excel, ask the human to manually confirm each cell’s calculation...?? So I don’t know how we’d enforce non-automation in practice. ‘It uses a large LLM’ feels like a weirdly arbitrary condition; though, ok, I could see how, for lack of alternatives, one might use something like that as an ad-hoc criterion, with all the problems it brings. But again, I think points 1. & 2. mean this is unrealistic or unsuccessful anyway.
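To make the arbitrariness of such an ad-hoc criterion concrete, here is a hypothetical sketch (the `uses_large_llm` flag, the tools, and every number are invented for illustration): a rule keyed to ‘uses a large LLM’ blocks a tool that displaces few labour-hours while waving through one that displaces vastly more.

```python
# Hypothetical sketch of an ad-hoc 'bad automation' criterion; all fields
# and numbers are invented to illustrate how arbitrary such rules get.
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    uses_large_llm: bool
    hours_displaced_per_year: float  # made-up estimate

def is_discouraged(tool: Tool) -> bool:
    # The kind of criterion discussed above: flag anything LLM-based.
    return tool.uses_large_llm

spreadsheet = Tool("spreadsheet app", uses_large_llm=False,
                   hours_displaced_per_year=1_000_000)
email_drafter = Tool("LLM email drafter", uses_large_llm=True,
                     hours_displaced_per_year=5_000)

for t in (spreadsheet, email_drafter):
    print(f"{t.name}: discouraged={is_discouraged(t)}, "
          f"hours displaced={t.hours_displaced_per_year:,.0f}")
```

The rule blocks the tool displacing 5,000 hours and permits the one displacing a million: the criterion tracks the technology label rather than the displacement we supposedly care about.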
Clearly, specific rule-based regulation is a dumb strategy. Acemoglu’s suggestions are tax incentives to keep employment, plus “labour voice” to let people decide, in the context of a specific company and job, how they want to work with AI. I like this self-governing strategy. Basically, the idea is that people will want to keep influencing things and will resist “job bullshittification” done to them, if they have the political power (“labour voice”). But they should also have an alternative choice of technology and work arrangement, one that doesn’t turn their work into rubber-stamping bullshit but still alleviates the burden (“machine usefulness”). Because if the only choice is between a rubber-stamping bullshit job and a burdensome job without AI, people may choose the rubber-stamping.
> If you really were able to coordinate globally to enforce 1. or 2. (extremely unlikely in the current environment, given the huge incentives for individual countries to remain weak on enforcement), then it seems you might as well directly impose the economic first-best solution w.r.t. robots vs. labor: high global tax rates and redistribution.
If anything, this problem seems more pernicious w.r.t. climate change mitigation and environmental damage, where the problem is much more distributed: not only the US and China, but Russia and India are also big emitters, there is big leverage in Brazil, Congo, and Indonesia with their forests, overfishing and ocean pollution are everywhere, etc.
With AI, it’s basically a question of regulating US and UK companies: the EU is always eager to over-regulate relative to the US, and China is already closely and successfully regulating its AI for a variety of reasons (which Acemoglu points out). The big problem of the Chinese economy is weak internal demand, and automating jobs, thereby increasing inequality and decreasing local purchasing power, is the last thing China wants.
But I should add, I agree that 1-3 pose challenging political and coordination problems. Nobody assumes this will be easy, including Acemoglu. It’s just one more in the line of hard political challenges posed by AI, along with questions like “aligned with whom?”, accounting for people’s voice past dysfunctional governments and political elites in general, etc.