Therapy is already technically possible to automate with ChatGPT. The issue is that people strongly prefer to get it from a real human, even when an AI would in some sense do a “better” job.
EDIT: A recent experiment demonstrating this: https://www.nbcnews.com/tech/internet/chatgpt-ai-experiment-mental-health-tech-app-koko-rcna65110
Note also that therapists are trained not to say certain things and to speak in a particular way. Unmodified ChatGPT can't be relied on to do this. You would need to start with a different base model and RLHF-train it to meet those requirements, and possibly also add multiple layers of introspection in which every output is checked before it reaches the user.
Basically, you are saying therapy is possible with demonstrated AI tech, and I would agree.
It would be interesting if, as a stunt, an AI company tried to get its system officially licensed as a therapist, such that only the bigotry of "the applicant has to be human" could block it.