Re: population ethics, OK, I understand your position now. However, this post is not the right place to argue about it, and the reasoning in the post basically doesn’t depend on the outcome of this argument (you can think of the post as taking “more people is better” population ethics as an assumption rather than an inference).
You can’t make an exception for depressed people that is reliable without just letting people decide things for themselves. The field is dangerous, someone who wants something will jump through the right hoops, etc.
Policies and restrictions don’t need to be very reliable to be largely effective. Being diagnosed by a psychiatrist with clinical depression is sufficiently burdensome that very few people will long for AI relationships so much that they will deliberately induce depression in themselves (or bribe the psychiatrist) to qualify. As for a black market for accounts: there is also a black market for hard drugs, which doesn’t mean that we should allow them, probably.
I do not see why AI psychotherapists, mental coaches, teachers or mentors are particularly complicated at this point. They are also potentially lucrative; and also potentially abusable with manipulation techniques to be more so. I would certainly prefer incentivizing their development with grants over grant-funded romantic partners, in terms of what we want to subsidize as a charitable society. The market for AI courtesans can indeed handle itself.
AI teachers and mentors are mostly buildable on top of existing technology and are lucrative (people want high exam scores to enter good universities, etc.), and there are indeed many companies doing this (e.g., Khan Academy).
AI psychotherapists are more central to my thesis. I seriously considered starting such a project a couple of months ago and discussed it with professional psychotherapists. There are two big clusters of issues: one technical, the other market/product.
Technical: I concluded that SoTA LLMs (GPT-4) are basically not yet capable of really “understanding” human psychology, or of “seeing through” deception and non-obvious cues (the “submerged part of the iceberg”), which a professional psychotherapist should be able to do. Also, any serious tool would need to integrate the user’s video/audio stream, detect facial expressions, and combine this information with the semantic context of the conversation. All of this is maybe possible with a big investment, but it is very challenging, SoTA R&D; it’s not just “build something hastily on top of LLMs”. The AI partner that I projected to arrive within 2-3 years from now is also not trivial to build, but even that is simpler than a reasonable AI psychotherapist. After all, it’s much easier to be an empathetic partner than a good psychotherapist.
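To make concrete what “integrating the video/audio stream with the semantic context” would even mean at the plumbing level, here is a minimal, hypothetical sketch. All names (FrameSignal, Utterance, build_session_context) are invented for illustration, and this glue code does nothing to solve the genuinely hard parts: reliable expression detection and a model that actually “understands” the submerged part of the iceberg.

```python
# Hypothetical sketch: fuse per-utterance facial-expression estimates with the
# conversation transcript before handing the combined context to an LLM.

from dataclasses import dataclass
from typing import List


@dataclass
class FrameSignal:
    timestamp: float   # seconds into the session
    expression: str    # e.g. "neutral", "tense", "tearful" (output of a video model)
    confidence: float


@dataclass
class Utterance:
    timestamp: float
    speaker: str       # "client" or "assistant"
    text: str


def signals_near(signals: List[FrameSignal], t: float, window: float = 5.0) -> List[FrameSignal]:
    """Facial-expression estimates within `window` seconds of an utterance."""
    return [s for s in signals if abs(s.timestamp - t) <= window and s.confidence > 0.6]


def build_session_context(utterances: List[Utterance], signals: List[FrameSignal]) -> str:
    """Interleave the transcript with nonverbal annotations, so the language model
    sees both what was said and how the client appeared while saying it."""
    lines = []
    for u in utterances:
        nearby = signals_near(signals, u.timestamp)
        note = ""
        if u.speaker == "client" and nearby:
            observed = ", ".join(sorted({s.expression for s in nearby}))
            note = f"  [nonverbal: {observed}]"
        lines.append(f"{u.speaker}: {u.text}{note}")
    return "\n".join(lines)

# The resulting string would be passed to an LLM together with a system prompt;
# whether the model can draw therapist-grade inferences from it is exactly the
# open question discussed above.
```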
Another technical problem: the absence of training data, with no easy way to bootstrap it (unlike AI partner tech, which can bootstrap off the interactions of its early users).
Market/product issue: any AI psychotherapist tool is destined to have awful user retention (unlike AI partners, of course). Such a tool falls into the “self-help” category, and these all have awful retention (habit-building apps, resolution/commitment apps, wellness apps).
On top of bad retention, the tool may not be very effective, because users won’t have social or monetary incentives to take the therapy seriously enough. The point of AI psychotherapy is to un-bottleneck human therapists, whose sessions are too expensive for most people; but on the other hand, the high price that people pay to psychotherapists, and the sort of “social commitment” they make in front of a real human, are what make people stick with therapy and work on themselves rather than drop it before seeing results.