[Question] What Other Lines of Work are Safe from AI Automation?

TL;DR: Post-AGI career advice needed (asking for a friend).

Let’s assume, for the sake of discussion, that Leopold Aschenbrenner is correct that at some point in the fairly near future (possibly even, as he claims, this decade) AI will be capable of acting as a drop-in remote worker: as intelligent as the smartest humans, and able to do basically any form of intellectual work that doesn’t have in-person requirements as well as or better than pretty much all humans. Assume also that it is at least two or three orders of magnitude cheaper than current pay for intellectual work (so at least an order of magnitude cheaper than a subsistence income), and probably still decreasing in cost.
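To make that cost gap concrete, here’s the rough arithmetic (with round illustrative numbers of my choosing, not figures from Aschenbrenner):

$$
\underbrace{\sim\!\$100\text{k/yr}}_{\text{typical knowledge-work pay}} \times \left(10^{-2}\text{ to }10^{-3}\right) \;=\; \$100\text{–}\$1{,}000\text{/yr}, \qquad \text{vs.} \qquad \underbrace{\sim\!\$10\text{k/yr}}_{\text{subsistence income}}
$$

so the assumed AI worker undercuts even a subsistence wage by a factor of 10–100.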

Let’s also assume for this discussion that at some point after that (perhaps not very long after, given the increased capacity to do intellectual work on robotics), developments in robotics overcome Moravec’s paradox, and mass production of robots greatly decreases their cost, to the point where a robot (humanoid or otherwise) can do basically every job that requires manual dexterity, hand-eye coordination, and/or bodily agility, again for significantly less than a human subsistence wage. Let’s further assume that some of the new robots are well-waterproofed, so that even plumbers, lifeguards, and divers are out of work, and also that some of them can be made to look a lot like humans, for tasks where that appearance is useful or appealing.

I’d also like to assume for this discussion that the concept of a “human job” is still meaningful: the human race doesn’t go extinct or get entirely disempowered, and we don’t to any great extent merge with machines. Some of us may get very good at using AI-powered tools or collaborating with AI co-workers, but we don’t effectively plug AI in as a third hemisphere of our brain to the point where it dramatically increases our capabilities.

So, under this specific set of assumptions, what types of paying jobs (other than being on UBI) will then still be available to humans, even if only to talented ones? How long-term are the prospects for these jobs (after the inevitable economic transition period)?

[If you instead want to discuss the probability/implausibility/timelines of any or all of my three assumptions, rather than the economic/career consequences if all three of them occurred, then that’s not an answer to my question, but it is a perfectly valid comment, and I’d love to discuss it in the comments section.]

So the criterion here is basically “jobs for which being an actual real human is a prerequisite”.

Here are the candidate job categories I’ve already thought of:

(This is my original list, plus a few minor edits: for a list significantly revised in light of all the discussion from other people’s answers and comments, see my answer below.)

  1. Doing something that machines can do better, but that people are still willing to pay to watch a very talented/skilled human do about as well as any human can (on TV or in person).

    Examples: chess master, Twitch streamer, professional athlete, Cirque du Soleil performer.

    Epistemic status: already proven for some of these: machines have been able to do the first two better than any human for a while now, yet people are still interested in paying to watch a human do them about as well as a human can. It also seems very plausible for the others, which current robotics is not yet up to doing better.

    Economic limits: If you’re not among the top O(1000) people in the world at some specific activity that plenty of people are interested in watching, then you can make roughly no money off this. Despite the aspirations of a great many teenaged boys, being an unusually good (but not amazing) video gamer is not a skill that will make you any money at all.

  2. Doing some intellectual and/or physical work that AI/robots can now do better, but that for some reason people are willing to pay at least an order of magnitude more to have done less well by a human, perhaps because they trust humans more. (Could also be combined with item 3. below.)

    Examples: doctor, veterinarian, lawyer, priest, babysitter, nurse, primary school teacher.

    Epistemic status: Many people tell me “I’d never let an AI/a robot do <high-stakes intellectual or physical work> for me/my family/my pets…” They are clearly quite genuine in this opinion, but it’s unclear how deeply they have considered the matter. It remains to be seen how long this opinion will last in the presence of a very large price differential when the AI/robot-produced work is actually, demonstrably, just as good if not better.

    Economic limits: I suspect there will be a lot of demand for this at first, and that it will decrease over time, perhaps even quite rapidly. Requires being reliably good at the job, and at appearing reassuringly competent while doing so.

    I’d be interested to know if people have specific examples of this that they believe will never go away, or at least will take a very long time to go away. (Priest is my personal strongest candidate.)

  3. Giving human feedback/input/supervision on AI/robotic work/models/training data, in order to improve, check, or confirm its quality.

    Examples: current AI-training crowd-workers, Wikipedian (currently unpaid), acting as a manager or technical lead to a team of AI white-collar workers, focus group participant, filling out endless surveys on the fine points of Human Values.

    Epistemic status: seems inevitable, at least at first.

    Economic limits: I imagine there will be a lot of demand for this at first. I’m rather unsure whether that demand will gradually decline, as the AIs get better at doing things and self-training without needing human input, or will increase over time because the overall economy is growing so fast, and/or more capable models need more training data, and/or society keeps moving out of the previous training distribution. [A lot of training data is needed, more training data is always better, and the resulting models can be used a great many times; however, there is clearly an element of diminishing returns as more data is accumulated, and we’re already getting increasingly good at generating synthetic training data.]

  4. In-person sex work where the client is willing to pay a (likely order-of-magnitude) premium for a real human provider.

    Epistemic status: human nature.

    Economic limits: Requires rather specific talents.

  5. Providing some nominal economic value while being a status symbol, where the primary point is to demonstrate that the employer has so much money they can waste some of it on employing a real human (“They actually have a human maid!”)

    Examples: (status symbol) receptionist, maid, personal assistant

    Epistemic status: human nature (assuming there are still people this unusually rich).

    Economic limits: There are likely to be relatively few positions of this type: at most a few per person so unusually rich that they feel a need to show this fact off. (Human nobility used to do a lot of this, centuries back, but there the servants were supplying real, significant economic value, and the being-a-status-symbol component of it was mostly confined to the uniforms the servants wore while doing so.) Requires rather specific talents, including looking glamorous and expensive, and probably also being exceptionally good at your nominal job.

  6. Providing human-species-specific reproductive or medical services.

    Examples: Surrogate motherhood, wet-nurse, sperm/egg donor, blood donor, organ donor.

    Epistemic status: still needed.

    Economic limits: Significant medical consequences for the provider; demand is low, and improvements in medicine may reduce it further.

So, what other examples can people think of?

One category whose long-term viability I’m personally really unsure about is being an artist/creator/influencer/actor/TV personality. Just being fairly good at drawing, playing a musical instrument, or other creative skills is clearly going to be automated out of having any economic value, and being really rather good at it is probably going to turn into “your primary job is to create more training data for the generative algorithms”, i.e. become part of item 3. above. What is less clear to me is whether (un-human-assisted) AIs will ever become better than world-class humans (who are using AI tools and/or working with AI coworkers) at the original-creativity aspect of this sort of work (they will, inevitably, get technically better than unassisted humans at performing it), and, if they do, to what extent/for how long people will still want content from an actual human instead, just because it’s from a human, even if it’s not as good (thus making this another example of either item 1. or 5. above).