Note also that therapists are supposed to be trained not to say certain things and to talk in a certain way. ChatGPT, unmodified, can't be relied on to do this. You would need to start with another base model and RLHF-train it to meet the above, and possibly also have multiple layers of introspection where every output is checked.
Basically you are saying therapy is possible with demonstrated AI tech, and I would agree.
It would be interesting if, as a stunt, an AI company tried to get their solution officially licensed, where only the bigotry of "the applicant has to be human" would block it.