I’m still watching this (it’s interesting, but 6 hours long!), and will have more comments later.
From Shulman’s point of view, in what I’ve watched so far, what matters most about the categories of jobs above is the extent to which they are critical to AI/robotic economic growth and could end up becoming a limiting factor (bottleneck) on it.
My categories 1. and 4.–6. (for both the original version of 4. and the expanded v2 version in a comment) are all fripperies: if these jobs went entirely unfilled, and the demand for them unfulfilled, humans would be less happy (probably not by that much), but AI/robotic economic growth would roar on unabated. Category 2. could matter, but for this category AI/bots can do the job; consumers just strongly prefer a human doing it. So a shortage of humans willing to do these jobs, relative to demand, would increase the price differential between a human and an AI provider, and sooner or later that differential would reach the point where people are willing to go with the AI option: demand would decrease, and AI/bots would fill the gap and do a good job of it. So this effect is inherently self-limiting, cannot become too much of a bottleneck, and I can’t see it being a brake on AI/robotic economic growth rates.
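To make that self-limiting dynamic concrete, here’s a minimal toy sketch (my own illustration; the demand curve, elasticity, and supply numbers are all assumptions, not anything from the post): as the supply of willing humans shrinks, the market-clearing price premium over the AI provider rises, and demand simply migrates to the AI option rather than going unserved.

```python
# Toy model of the self-limiting "consumers prefer a human" market (category 2).
# All functional forms and parameter values are illustrative assumptions.

def human_demand(premium: float, elasticity: float = 2.0) -> float:
    """Fraction of consumers still willing to pay for a human provider at a
    given price premium over the AI option (premium = p_human / p_ai - 1)."""
    return 1.0 / (1.0 + premium) ** elasticity  # falls as the premium grows

def market_clearing_premium(human_supply: float, tol: float = 1e-9) -> float:
    """Bisect for the premium at which demand for human providers equals the
    (exogenously shrinking) supply of humans willing to do the job."""
    lo, hi = 0.0, 1e6
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if human_demand(mid) > human_supply:
            lo = mid  # excess demand for humans: the premium must rise
        else:
            hi = mid
    return lo

for supply in [0.5, 0.1, 0.01]:
    prem = market_clearing_premium(supply)
    print(f"human supply {supply:5.2f} -> clearing premium {prem:7.1%}; "
          f"demand that switches to AI: {1 - supply:.0%}")
```

However scarce the human providers get, the gap is absorbed by a price rise plus substitution, so nothing upstream of this market is ever starved of an input.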
The glaring exception to this is my category 3.: providing more data about human preferences. This is something the AIs genuinely, fundamentally need (if they’re aligned; a paperclip maximizer wouldn’t need it). Setting aside the possibility of substituting for this data with things like AI-generated synthetic data, or AI historical/scientific research into humans that requires no actual human participation (or data collection disguised as video games or virtual environments, say, though that just swaps a salary for a free-entertainment motivation to participate, so economically it’s not that different), it is a vital economic input from the humans into the AI/robotic sector of the economy, and if it became too expensive, it could actually become a bottleneck/limiting factor in the post-AGI economy.
So, for predicting an upper bound on FOOM growth rates, it is actually a really important question how much and for how long AI needs human data/input/feedback of the type that category 3. jobs generate, whether this demand decreases or increases over time, and to what extent functionally equivalent data could be synthesized or researched without willing human involvement. This could in fact be the Baumol effect bottleneck that Carl Shulman has been looking for but hadn’t found: AI’s reliance on (so far, exponentially increasing amounts of) data about humans that can only come from humans.
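A minimal sketch of why this could cap growth (my own toy model; the CES functional form, the elasticity, and both growth rates are assumptions): if the AI sector and human-generated preference data are complements rather than substitutes, aggregate growth asymptotes to the growth rate of the slow, population-limited data sector, which is exactly the Baumol dynamic.

```python
# Toy two-sector Baumol model; parameters and functional form are assumptions.
# Output combines AI-sector capacity (fast productivity growth) with
# human-generated preference data (slow, population-limited growth) via a
# CES aggregator: rho -> -inf is Leontief (pure complements), rho = 1 is
# perfect substitutes.

def ces(ai: float, data: float, rho: float, share: float = 0.5) -> float:
    return (share * ai**rho + (1 - share) * data**rho) ** (1 / rho)

ai, data = 1.0, 1.0
ai_growth, data_growth = 0.50, 0.02   # 50%/yr AI sector vs 2%/yr human data
prev = ces(ai, data, rho=-2.0)

for year in range(1, 31):
    ai *= 1 + ai_growth
    data *= 1 + data_growth
    out = ces(ai, data, rho=-2.0)     # complements: data becomes the bottleneck
    if year % 5 == 0:
        print(f"year {year:2d}: aggregate growth {out / prev - 1:5.1%}")
    prev = out
```

By year 10 or so, aggregate growth has collapsed to the ~2%/yr of the human-data sector despite the AI sector compounding at 50%/yr. Push rho toward 1 (easy substitution, e.g. good-enough synthetic data) and growth tracks the fast sector instead, which is why the substitutability question above is the crux.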
The recent Carl Shulman podcast (part 1, part 2) is informative on this question (though it should be taken in the spirit of exploratory engineering, not forecasting). In particular, in a post-AGI magically-normal world, jobs that humans are uniquely qualified to do won’t be important to the industry and will be worked around. What remains of them will have the character of billionaires hiring other billionaires as waiters, so treating this question as being about careers seems noncentral.