Baumol-effect jobs where it is essential (or strongly preferred) that the person performing the task is actually a human being. So: therapist, tutor, childcare, that sort of thing.
Huh. Therapists and tutors seem automatable within a few years. I expect some people will always prefer an in-person experience with a real human, but if the price is too high, people are just going to talk to a language model instead.
However, I agree that childcare does seem like it’s the type of thing that will be hard to automate.
My list of hard-to-automate jobs would probably include things like plumbing, carpet installation, and construction work.
Therapy is already technically possible to automate with ChatGPT. The issue is that people strongly prefer to get it from a real human, even when an AI would in some sense do a “better” job.
EDIT: A recent experiment demonstrating this: https://www.nbcnews.com/tech/internet/chatgpt-ai-experiment-mental-health-tech-app-koko-rcna65110
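For concreteness, the "technically possible today" part is roughly a matter of wrapping a chat model in a system prompt. Here is a minimal sketch using the OpenAI Python client; the model name and prompt wording are placeholder assumptions, not a vetted therapeutic protocol:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder prompt; a real deployment would need far more careful wording.
SYSTEM_PROMPT = (
    "You are a supportive listener. Respond with empathy, ask open-ended "
    "questions, and never give medical diagnoses or prescribe treatment."
)

def therapy_reply(history: list[dict]) -> str:
    """Return one assistant turn given prior {role, content} messages."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    return response.choices[0].message.content

print(therapy_reply([{"role": "user", "content": "I've been feeling overwhelmed lately."}]))
```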
Note also that therapists are trained not to say certain things and to talk in a certain way. ChatGPT, unmodified, can't be relied on to do this. You would need to start with another base model, RLHF-train it to meet those requirements, and possibly also add multiple layers of introspection where every output is checked.
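A minimal sketch of what those "layers of introspection" might look like: a wrapper that passes every candidate reply through a stack of independent checks and refuses to emit anything that fails one. All names here are hypothetical, and the toy checker functions stand in for whatever trained classifiers or RLHF-tuned filters you would actually use:

```python
# Sketch of an output-checked therapy bot: every candidate reply must pass
# all checks before it reaches the user. The checker logic below is a toy
# placeholder for real trained classifiers.
from typing import Callable

Check = Callable[[str], bool]

BANNED_PHRASES = ["you should just", "my diagnosis is"]  # illustrative only

def no_banned_phrases(reply: str) -> bool:
    return not any(p in reply.lower() for p in BANNED_PHRASES)

def no_directives(reply: str) -> bool:
    # Therapists reflect rather than instruct; reject imperative openings.
    return not reply.lstrip().lower().startswith(("you must", "you need to"))

CHECKS: list[Check] = [no_banned_phrases, no_directives]

def checked_reply(generate: Callable[[], str], max_attempts: int = 3) -> str:
    """Regenerate until a candidate passes every check, else fall back."""
    for _ in range(max_attempts):
        candidate = generate()
        if all(check(candidate) for check in CHECKS):
            return candidate
    return "I'm not able to respond helpfully right now."  # safe fallback
```

In practice each check would itself be a trained model rather than a string match, and a human-escalation path would replace the canned fallback.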
Basically, you are saying therapy is possible with demonstrated AI tech, and I would agree.
It would be interesting if, as a stunt, an AI company tried to get their solution officially licensed, where only the bigotry of "the applicant has to be human" would block it.