Someone complained, in a meme, that tech companies building AI are targeting the wrong tasks: writing books, music, TV, but not the office drudge work, leading to a world in which the meaning-making creative pursuits are lost to humans. My reply to this is:
The order in which AI replaces jobs is discovered, not chosen. The problem is that most of the resources aren’t going into “AI for writing books” or “automating cubicle jobs”, they’re going into more-abstract targets like “scaling transformers” and “collecting data sets”.
How these abstract targets cash out into concrete tasks isn’t easy to predict in advance, and, for AI accelerationists, doesn’t offer many relevant degrees of freedom.
And, to the extent that money does go into these tasks per se, I’d bet that the spending is extremely imbalanced in the opposite way to what they assume: I’d bet way more money gets spent on tabular learning, ‘robotic process automation’, spreadsheet tooling, and so on than gets spent on Jukebox-like full music generation. (Certainly I skim a lot more of the former on Arxiv.) It’s telling that the big new music generation thing, almost 3 years after Jukebox, is… someone jankily finetuning Stable Diffusion on ‘images’ of music lol. Not exactly what one would call an active field of research.
So there is a relevant degree of freedom where you can ~A C C E L E R A T E~ - it’s just the wrong one from what they want.
Companies deploying AI to do their office work would be poised to take alignment very seriously. Office work going wrong in a slight but significant way is easy to imagine the relevance of, hard to detect, and possibly a nightmare to recover from.