A thought on AI unemployment and its consequences
I haven’t given much thought to the concept of automation- and computer-induced unemployment. Others at the FHI have been looking into it in more detail—see Carl Frey’s “The Future of Employment”, which estimated the degree of automatability of 70 chosen professions, and extended the results using O*NET, an online service developed for the US Department of Labor that describes the key features of an occupation as a standardised and measurable set of variables.
The reason I haven’t been looking at it too much is that AI unemployment has considerably less impact than AI superintelligence, and is thus a less important use of time. However, if automation does cause mass unemployment, then advocating for AI safety will happen in a very different context from the current one. Much will depend on how that mass unemployment problem is dealt with, what lessons are learnt, and the views of whoever is most powerful in society. Just off the top of my head, I could think of four scenarios for whether risk goes up or down, depending on whether the unemployment problem was satisfactorily “solved” or not:
AI risk \ Unemployment | Problem solved | Problem unsolved
---|---|---
Risk reduced | With good practice in dealing with AI problems, people and organisations are willing and able to address the big issues. | The world is very conscious of the misery that unrestricted AI research can cause, and very wary of future disruptions. Those at the top want to hang on to their gains, and they are the ones with the most control over AI and automation research.
Risk increased | Having dealt with the easier automation problems in a particular way (eg taxation), people underestimate the risk and expect the same solutions to work. | Society is locked into a bitter conflict between those benefiting from automation and those losing out, and superintelligence is seen through the same prism. Those who profited from automation are the most powerful, and decide to push ahead.
But of course the situation is far more complicated, with many different possible permutations, and no guarantee that the same approach will be used across the planet. And the division into four boxes should not fool us into thinking that each scenario is of comparable probability to the others—more research is (really) needed.
Here’s a video on AI job automation, intended to be accessible to a nontechnical audience, but still interesting:
http://qz.com/250154/still-think-robots-cant-do-your-job-this-video-may-change-your-mind/
I saw that video on an entirely different site shortly before reading this article for the first time. I think it’s a fairly good video and would recommend it to people who have 15 minutes.
Yep, that’s partially what inspired me to think about this in the first place.
This video is a polished example of the Luddite fallacy.
Are things “different this time”? As long as we haven’t created godlike AI, humans will still have a comparative advantage in something.
For the last two centuries people have been warning that automation will cause long-term unemployment, and they have been wrong every time, and hopefully they will continue to be wrong. But the horse analogy used in the video (and also by Gregory Clark in A Farewell to Alms) is what convinced me that things might indeed be different this time.
But is it clear that automation hasn’t caused long-term unemployment?
Something occurred to me while reading this comment that I’m now considering, though it isn’t necessarily related to this comment directly:
Automation doesn’t actually have to be the sole cause of long-term unemployment for it to be problematic. If automation merely slows the rate at which re-employment occurs after something else (perhaps a recession) causes the unemployment, that would still be problematic.
For instance, if we don’t recover to the pre-recession peak of employment before we have a second recession, and we don’t recover to the pre-second-recession peak of employment before we have a third recession… that would be a downward spiral in employment with large economic effects, and every single one of the sudden downward drops could be caused by recessions, with the automation just hampering re-employment.
I’m kind of surprised I didn’t think of something like this before, because it sounds much more accurate than my previous thinking. Thank you for helping me think about this.
I’d say it’s at least clear that so far automation has caused little to no long-term unemployment. (Again, the industrial revolution started a couple centuries ago and yet there are still jobs.) Generally what happens is:
New automation is introduced that allows widgets to be made at lower labor costs.
Many people in the widget industry lose their jobs. (Which is no fun for them.)
Widgets are now cheaper, which means people who buy widgets can now afford more of them (somewhat ameliorating the unemployment in the widget industry) or spend their extra money on other things (meaning more employment is available in other industries for the former widget makers).
Edit: Take agriculture as an example. Wikipedia says that about a billion people (1/7 of the world’s population and “over 1⁄3 of the available work force”) currently work in agriculture. That article doesn’t give versions of this worldwide number for previous points in history, but it does say, “During the 16th century in Europe, for example, between 55 and 75 percent of the population was engaged in agriculture, depending on the country. By the 19th century in Europe, this had dropped to between 35 and 65 percent. In the same countries today, the figure is less than 10%.”
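The three-step story above can be sketched as a toy calculation (all numbers are hypothetical, chosen only to illustrate the mechanism):

```python
# Toy sketch of the widget argument: automation lowers the price of
# widgets, consumers buy somewhat more widgets, and the money they
# save is spent elsewhere, creating demand (and hence jobs) in other
# industries.

old_price, new_price = 10.0, 6.0       # automation cuts the widget price
budget = 100.0                         # a consumer's widget spending before
old_quantity = budget / old_price      # 10 widgets

# Suppose the consumer buys 20% more widgets at the lower price...
new_quantity = old_quantity * 1.2      # 12 widgets
widget_spend = new_quantity * new_price

# ...and the remaining money is demanded elsewhere in the economy.
freed_spending = budget - widget_spend

print(widget_spend, freed_spending)    # 72.0 28.0
```

Whether the freed spending actually re-employs the displaced widget makers is exactly what the replies below dispute.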
That doesn’t follow. It may mean that more jobs are available in other industries but these jobs require skills that the former widget makers don’t have, thus creating an imbalance where one area of the economy has reduced unemployment and one has increased unemployment.
Furthermore, what gets balanced out isn’t necessarily the number of jobs, but the amount of money: it is entirely possible that three jobs worth $X each are lost among widget makers while one job worth $3X is created elsewhere in the economy.
It looks like you’re arguing against empirical reality. The Industrial Revolution did happen. It did not lead to massive unemployment.
The Industrial Revolution led to the creation of new jobs that were still low skill jobs. Because they were low skill jobs, people displaced from other jobs would be able to take the new jobs. The fact that the new jobs were low skill jobs was good luck; it’s not a general characteristic of jobs that are created by technological advance, and there’s no reason to expect that other changes should also create jobs that are sufficiently low-skill that the displaced workers can take them.
Um, I don’t think that’s true. I am not sure what you mean by “low skill”, though. To clarify, let me make my statement stronger: up till now, technological progress did not lead to massive unemployment. Do you think that new jobs created during, say, the last 30 years are “still low skill jobs”?
I don’t know a single example of technological progress starting from early metallurgy (bronze, etc.) which led to massive unemployment. Do you really think it’s all just good luck?
It’s good luck in the sense that it’s accidental and not essential to technological progress. It’s not good luck in the sense of being based on a random element that could have turned out another way.
I suggest that earlier forms of technological progress shift the available jobs in a way which increases the level of skill needed for the new jobs (in comparison to the old jobs) by much less than later forms of technological progress do.
At this moment, I think the greatest advantage of humans with low intelligence is that they are relatively flexible, “easy to program”, and come with built-in heuristics for unexpected situations. By which I mean that they can easily walk across your factory or shop; and you don’t have to be a computer programmer to explain to them that you want them to pick boxes from one place and move them to other places, sorting them by color. And in case of fire (which you forgot to mention explicitly during their job training), instead of quietly continuing to move the boxes until everything burns, they would call for help.
Give me reasonably cheap robots with these skills, and I think some people will have no economic comparative advantage left. Getting from there to replacing an average programmer would probably be a shorter distance than getting from zero to there.
I agree with you that automated processes will eventually have an absolute advantage in all areas of productivity. However, humans only need a comparative advantage to be employable. The theory of comparative advantage is a “powerful yet counter-intuitive insight in economics” and I recommend checking it out. Ricardo’s example is especially instructive, link is below.
http://en.wikipedia.org/wiki/Comparative_advantage#Ricardo.27s_Example
Imagine Portugal is a robot, and England is a human.
I am not sure this analogy will work.
As an extreme example, today a computer processor can calculate one addition in 1 nanosecond, and one multiplication also in 1 nanosecond. A human can calculate one addition in 10 seconds, and one multiplication in 100 seconds (multiple-digit integers).
Taking the law of comparative advantages too literally, if I have a computer, I should be able to trade with people, offering to multiply integers for them, if they will add integers for me, for a ratio of e.g. 3 multiplications for 1 addition. They should profit, because instead of spending 100 seconds doing one multiplication, they only need to spend 30 seconds doing three additions for me, and then I will do the multiplication for them. I should profit, because instead of wasting 3 nanoseconds for three additions, I only need to spend 1 nanosecond for one multiplication.
But in real life it wouldn’t work, for the obvious reason: transaction costs are a few orders of magnitude higher than any possible profit.
This is a silly example mostly proving that even the comparative advantages are not guaranteed to save the day. If the difference between robots and humans becomes too large, the costs of having to deal with humans will outweigh the possible gains from trade.
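The arithmetic in the example above can be checked with a quick sketch (the one-second transaction cost is an arbitrary assumption, there only to show how it swamps the computer’s gain):

```python
# Toy version of the addition/multiplication trade: both parties gain
# on paper, but a realistic transaction cost erases the computer's gain.

human_add, human_mul = 10.0, 100.0   # seconds per operation for the human
cpu_add, cpu_mul = 1e-9, 1e-9        # seconds per operation for the computer

# Proposed trade: the human does 3 additions for the computer,
# the computer does 1 multiplication for the human.
human_gain = human_mul - 3 * human_add   # 70 seconds saved per trade
cpu_gain = 3 * cpu_add - cpu_mul         # ~2 nanoseconds saved per trade

# Both gains are positive, so comparative advantage says trade...
assert human_gain > 0 and cpu_gain > 0

# ...but even one second of overhead per trade (a hypothetical figure)
# makes the trade hugely negative for the computer.
transaction_cost = 1.0
print(cpu_gain - transaction_cost)       # hugely negative
```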
Upon review of all these comments, I was excited to craft a reply. So I started describing the theoretical reasoning and supporting empirical evidence for the theory of comparative advantage and its labor market interpretation. I also researched historical unemployment rates during periods of great technological progress. After a short while, a much more profitable idea dawned on me. Introducing…
Mac’s Unemployment Nightmare Insurance Policy
Worried that technological advances will leave you jobless? Hedge that risk!
Confident that automation will wreak havoc on the economy? Put your money where your mouth is!
For an annual premium that’s just a fraction of your benefit, Mac promises to pay out when the U.S. unemployment rate exceeds 35%. That’s a whole 10 percentage points lower than the worst-case scenario in the Quartz video, so what are you waiting for? Call us today! 1-800-MAC-RICH
Hm… why would I buy technological-unemployment insurance? If I’m understanding the theories right, in such a scenario, income/returns to equities should increase markedly. So wouldn’t I be better off taking my premiums and buying stock market indexes? Unlike other scenarios like life insurance (where death could strike at any time and so self-insurance can be a bad idea), everyone seems to agree that there’s not going to be any such spikes within the next, say, 10 years, and that allows for a decent nest-egg to be built up.
Equities are not guaranteed to hedge this risk. Equity total returns are influenced by many factors, including: interest rates, valuation metrics, economic sensitivity, inflation, the tax regime...and on and on. Moreover, tons of research has shown that major equity indexes incorporate relevant information into their prices very quickly, so it is unlikely that you know something the market does not (see Efficient Market Hypothesis).
I’ll expect your call 10 years from now.
Sure, but so is your insurance fund. Worse, actually, since if you structure your investments wrong you may go flat bankrupt, which would be pretty much impossible if I’m holding indices.
Yes, but that’s irrelevant. In this scenario, I’m insuring, not investing. I don’t care about average or risk-adjusted returns or stuff like that, I care only that in those states of the world where there is severe technological unemployment likely affecting me, I have assets of value. So the question is, in technological unemployment scenarios (whatever their probability, howsoever they are priced into the efficient market) would my equities be worth more? I think they would.
I dunno, so far I’m not impressed by your prospectus. :)
Mac, I think you may be underestimating the level of knowledge of the other commenters here. It’s not like we haven’t heard of David Ricardo or of the EMH.
What fraction?
How about for every year from now until either of us dies I give you $1k if the US unemployment is below 35% and you give me $20k otherwise?
That would be a pretty bad deal for you.
Deal! Tell your friends too!
Note to SEC: This isn’t real.
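For what it’s worth, the break-even odds of that (not real!) bet are easy to compute:

```python
# Break-even check for the hypothetical bet above: pay $1k in any year
# US unemployment stays below 35%, receive $20k in any year it doesn't.
premium, payout = 1_000, 20_000

# Expected value is zero when premium * (1 - p) == payout * p,
# i.e. p = premium / (premium + payout).
break_even_p = premium / (premium + payout)
print(break_even_p)   # ~0.0476
```

So the bet only favours the buyer if the annual probability of 35%+ unemployment exceeds roughly 4.8%.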
In Ricardo’s example, England and Portugal already exist and will not change significantly. The number of producers is basically constant.
In the human/robot example, given such a huge absolute advantage to the robot, we would just build more robots. That would be vastly more economical than having humans do the things that the bots are much better at, but isn’t their highest aptitude.
The question is not whether we hold a comparative advantage, but whether that advantage will remain large enough to pay for, e.g., food.
Things are always “different this time” ;-)
More seriously, employment and unemployment rates are complex and depend on what kind of macroeconomics you use, and there are some impacts from government interventions, etc.
But this change can have a strong effect even outside the employment rate, which is what we are seeing: the wages of the low skilled and those who cannot retrain are plummeting, and we’re seeing a division into “lovely” versus “lousy” jobs (eg: http://eprints.lse.ac.uk/20002/1/Lousy_and_Lovely_Jobs_the_Rising_Polarization_of_Work_in_Britain.pdf ).
Sure!
http://lesswrong.com/lw/9bt/what_jobs_are_safe_in_an_automated_future/5nlb
Indeed, the fact that we can tell an equally plausible-sounding story (the contents of each table cell) for opposite outcomes should show that the intuitive plausibility of an outcome, or the ability to tell a story about it, is worthless evidence of its probability. We should demand very strong proof before trusting a model that tells us opposite outcomes are roughly equally likely.
Especially when it took me about a minute to come up with all four stories...
Some folks keep suggesting that AI unemployment applies to low IQ people. But if your first link (p. 24-7) is right, it may be better to say it applies to those with low manual dexterity, low social intelligence, and/or low IQ or creative intelligence.
Indeed! Insurance underwriters (a highly paid position) are ranked as highly likely to be replaced by automation… And cleaners are hanging on pretty well, in contrast.
Bartenders, waiters and restaurant cooks very likely to be automated away? What the heck?
Automation need not be a perfect substitute (hence need not possess the same skills as those it replaces). Automated ordering systems at tables and automatic cooking systems seem very plausible.
What is amazing is that computers have not already reduced the workforce to run bureaucracies.
In my upcoming book I analyze the Australian Tax Office in 1955 (when Parkinson wrote his great paper) and 2008. At both times it took about 1.5% of GDP to do essentially the same function. (Normalizing for GDP takes into account inflation and population size.)
Back in 1955 tax returns were largely processed by hand, by rows of clerks with fountain pens. Just one ancient mainframe could do the work of thousands of people. Today few returns are even touched by a human hand.
The steam tractor and the combine harvester have reduced the agricultural workforce from 80% of the population to less than 20%, depending on how you count. But the huge increase in the power of bureaucratic tools has produced no reduction in the proportion of the population who work in bureaucracies—quite the opposite.
People complain about increased regulation nowadays—are these bureaucrats managing more things than before?
I have trouble understanding what the problem is.
Humans are not necessary for work, but we still command enormous economic power. It’s not like we are going to be poor. So we might end up with something like 1 working day and 6 days off in a week.
In the video, wasn’t it a happy ending for the horses? The majority of horses are in life-long retirement. If we can get stuff without working, why insist on working?
Unless those robots will belong to you, or the money from the owners of the robots will somehow get to you (e.g. taxation and basic income), you are going to be poor when you are no longer able to compete with the robots.
By the way, there will also be robotic police and armies, able to protect the system regardless of how many humans become dissatisfied, and how deeply.
Okay, that answers the question. So while we as a society can be rich and “on average” would be better off, the distribution of wealth can still be problematic. It would still not seem to be about computers per se, but it would seem that computers build up pressure to solve it.
What I find different in this case, compared to the oft-rediscussed hard topic of different economic structures for different sections of the population, is that computers can potentially operate unmanned. There is thus no economic necessity for the side with the latest tech to enter into any arrangement with the larger population, and so a whole lot of humans simply do not participate. This might mean that instead of suppression, the robotic police would employ exclusion and isolation. The haves and have-nots would form separate cities that would not trade with each other (but would probably trade among themselves and their own kind). While the haves would like to have markets for their products, the have-nots don’t have anything worthwhile to offer back (even total economic submission, i.e. slavery, would not be enough). Not an especially happy outcome, but the standard of living on the have-not side need not be lower than what we currently have (even if its advancement would be frozen).
One day the robot owners may decide they want to take the land of the non-owners, or pollute the air or water… and there is no economic pressure to stop them.
As in the debates about AI, malice is not required here, only indifference.
It strikes me that this might already be happening to some extent, in the form of getting favourable access to natural resources. Also, within countries that get their income mostly from exporting natural resources, the living standard of people irrelevant to the value-extraction process doesn’t develop. Thus you have oil sheikhs in a country riddled with poverty.
If the politics shakes out well, yes. If it shakes out badly...
I just want to point out something I thought was a little funny: people keep imagining a situation where first IQ 80 people are out of a job, then IQ 100, then IQ 120, and then everyone. I don’t get why. IQ depends on a large number of different mental modules. It’s the modules that we might expect to be successively automated, and there’s no a priori reason why the modules IQ 80 people use in their jobs should be especially susceptible to automation. Now it may so happen that these susceptible modules are most used by IQ 80 folks, as your first link implies, but that would be a coincidence. So my guess is that IQ is a clumsy measure here.