I’d be more interested in the in-between: what about cases where we don’t have general AI, but we have automation that drastically cuts jobs in a field, without causing counter-balancing wage increases or job creation in another field?
For instance, imagine the new technology is something really simple to manufacture (or worse, a new purpose for something we already manufacture en masse) — it’s so easy to produce these things that we don’t really need to hire more workers: just push a couple of levers and all the demand is met, just like that.
Is there something interesting to be said about what happens then? Can this be modeled?
(In practice, even this is too extreme a scenario of course, everything sits on a continuum.)
Something more realistic, I think, is that even when a new useful machine is introduced, and the productivity of the producers of that machine shoots up, the salaries of the machine-makers won’t shoot up proportionally (maybe it’s easy to train people to make these machines?). And maybe the ratio skews: say automation removes X people, and the increased demand for automation gets X/5 people hired. So on the one hand you get major job loss, and on the other a minor salary hike and minor job creation.
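To make that ratio concrete, here’s a minimal back-of-the-envelope sketch in Python. Every number in it (X, the wages, the 20% wage bump) is made up purely for illustration:

```python
# Toy numbers for the displacement scenario above: automation removes X
# jobs in the adopting field, and demand for the machines creates X/5
# jobs at the machine-maker, at a somewhat higher wage.

X = 1000                # jobs automated away (assumed)
jobs_created = X // 5   # new machine-making jobs (the X/5 ratio above)
old_wage = 40_000       # annual wage in the automated field (assumed)
new_wage = 48_000       # machine-maker wage, a 20% bump (assumed)

net_jobs = jobs_created - X
old_wage_bill = X * old_wage
new_wage_bill = jobs_created * new_wage

print(f"net employment change: {net_jobs}")                        # -800
print(f"wage bill before: {old_wage_bill:,}")                      # 40,000,000
print(f"wage bill after:  {new_wage_bill:,}")                      # 9,600,000
print(f"wages no longer paid: {old_wage_bill - new_wage_bill:,}")  # 30,400,000
```

Even with a generous wage bump, the wage bill shrinks by far more than the new jobs pay out, which is exactly the asymmetry described above.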
How to model what is lost here? Isn’t there some kind of conservation law, so that the surplus doesn’t just vanish but ends up somewhere (presumably in the pockets of the shareholders of both the companies buying and the companies producing the machines)?
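Here’s a toy accounting sketch of that “conservation” intuition, continuing the made-up numbers from the previous snippet. The assumption (and it is only an assumption) is that the adopting firm’s output value is unchanged, so the wage bill that disappears is repartitioned between the machine-maker’s wages and the two firms’ profits:

```python
# Toy accounting identity for "where does the surplus go?".
# Assumes output value is unchanged, so the adopting firm's wage savings
# split into the machine-maker's revenue and its own extra profit.

wages_saved = 30_400_000    # from the previous sketch
machine_price = 12_000_000  # what the adopting firm pays for machines (assumed)
maker_wages = 9_600_000     # machine-maker's wage bill (from the previous sketch)

adopter_extra_profit = wages_saved - machine_price
maker_extra_profit = machine_price - maker_wages  # ignoring other costs

# The "conservation" check: savings = wages paid out + profits captured.
assert wages_saved == maker_wages + adopter_extra_profit + maker_extra_profit

print(f"adopter shareholders gain:   {adopter_extra_profit:,}")  # 18,400,000
print(f"maker shareholders gain:     {maker_extra_profit:,}")    #  2,400,000
print(f"paid as machine-maker wages: {maker_wages:,}")           #  9,600,000
```

Under these toy assumptions nothing disappears: the money simply moves from the displaced workers’ wages into profits on both sides, plus a much smaller wage bill at the machine-maker.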