Commenting based on lessons from some experience doing UBI analysis for Switzerland/Europe:
The current system has various costs (time and money, but maybe more importantly, opportunities wasted by perverse incentives) associated with proving that you are eligible for some benefit.
On the one hand, yes, and it's a key reason why NIT/UBI schemes are often popular on the right; even Milton Friedman already advocated an NIT. That said, there are also analyses suggesting the poverty trap (i.e., overwhelmingly strong labor disincentives for the poor, caused by outrageously high effective marginal tax rates as benefits fade out and taxes kick in) may be partly overrated, so smoothing the earned-to-net income function may not help as much as some hope. And, as tends to be forgotten, people with special needs may not be able to live on a UBI alone, so not all current social security benefit mechanisms can usually be replaced by a standard UBI.
On the other hand, once you have a conditional welfare system without crazily large poverty traps, overall labor incentives may still be stronger than under a UBI (assuming one generous enough to allow a reasonable life), once you also take into account the high marginal tax rates required to finance that UBI. This seems to hold even in relatively rich countries (we calculated this for Switzerland).
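To make the comparison concrete, here is a toy sketch in Python. All parameters are made up for illustration (they are not the Swiss figures we worked with): a means-tested benefit that tapers off produces an effective marginal tax rate (EMTR) above 100% in the fade-out region, while a UBI financed by a flat tax smooths the EMTR but raises it everywhere above the fade-out band.

```python
# Illustrative sketch only: stylized, made-up parameters.
# Compares net income under (a) a means-tested benefit that fades out and
# (b) a UBI financed by a flat tax, and reports the effective marginal tax
# rate (EMTR): the share of an extra earned unit lost to benefit
# withdrawal plus taxes.

def net_means_tested(gross, benefit=24_000, taper=0.8, tax=0.25):
    """Benefit withdrawn at rate `taper` per earned unit; earnings taxed at `tax`."""
    residual_benefit = max(0.0, benefit - taper * gross)
    return gross * (1 - tax) + residual_benefit

def net_ubi(gross, ubi=24_000, flat_tax=0.45):
    """Everyone gets the UBI; a high flat tax finances it."""
    return gross * (1 - flat_tax) + ubi

def emtr(net_fn, gross, step=1.0):
    """EMTR = 1 - (change in net income) / (change in gross income)."""
    return 1 - (net_fn(gross + step) - net_fn(gross)) / step

# In the fade-out region, the means-tested EMTR is taper + tax = 1.05 here:
# earning more actually makes you slightly poorer -- a poverty trap.
print(round(emtr(net_means_tested, 10_000), 2))   # 1.05 while benefit tapers
# Once the benefit is fully withdrawn (gross > 30_000), EMTR drops to 0.25.
print(round(emtr(net_means_tested, 40_000), 2))   # 0.25
# The UBI smooths this into a constant EMTR, but at a higher level everywhere:
print(round(emtr(net_ubi, 40_000), 2))            # 0.45
```

The point of the sketch is only the qualitative shape: the UBI removes the >100% spike, but at the cost of a permanently higher marginal rate on everyone above the old fade-out band.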
Of course, with AI Joblessness all this might change anyway, in line with the underlying topic of the post here.
Plus you need to pay the people who verify all this evidence.
This tends to be overrated; when you look at the stats, this staff cost is really small compared to total traditional social security or UBI outlays (we looked at the numbers for Switzerland, but I can only imagine the orders of magnitude are similar in other developed countries).
From what you write, Acemoglu's suggestions seem unlikely to be very successful, in particular given international competition. I'm painting a bit black-and-white here, but I think the following logic remains salient even in the messy real world:
1. If your country unilaterally tries to halt development of the infinitely lucrative AI inventions that could automate jobs, other regions will be more than eager to accommodate the inventing companies. So, from the country's egoistic perspective, you might as well develop the inventions domestically and at least benefit from being the inventor rather than the adopter.
2. If your country unilaterally tries to halt adoption of the technology, there are numerous capable countries keen to adopt it and swamp you with their sales.
3. If you really could coordinate globally to make 1. or 2. work (extremely unlikely in the current environment, given the huge incentives for individual countries to stay lax on enforcement), then you might as well directly impose the economic first-best solution w.r.t. robots vs. labor: high global tax rates and redistribution.
Separately, I at least spontaneously wonder: how would one even go about differentiating the 'bad automation' to be discouraged from legitimate automation without which no modern economy could run competitively anyway? For a random example: if Excel didn't yet exist (or, for its next update..), would we have to say: sorry, you cannot build such software, as any given spreadsheet risks removing thousands of hours of work...?! Or at least: please, Excel, ask the human to manually confirm each cell's calculation...?? So I don't know how we'd enforce non-automation in practice. Just 'it uses a large LLM' feels like a weirdly arbitrary condition; though, ok, I could see how, for lack of alternatives, one might use something like that as an ad-hoc criterion, with all the problems it brings. But again, I think points 1. & 2. mean this is unrealistic or unsuccessful anyway.