You seem to assume monogamy in the sense that a person needs to break up with an AI partner to develop another relationship with a human. I think that would be unlikely.
An AI partner is recommended to a person by a psychotherapist for some other reason: for example, the person has a severe defect in their physical appearance, or a disability, and the psychotherapist sees that the person doesn't have the psychological resources or the willingness to deal with their very small chances of finding a human partner (at least before the person turns 30, at which point they could enter a relationship with an AI anyway); or the person has depression or very low self-esteem and the psychotherapist thinks an AI partner may help them combat this issue; etc.
The base rate for depression alone among 12-17-year-olds is 20%. A company that sells an AI partner would likely be able to optimize the partner so that it helps with depression, and run a study to prove it.
In the regulatory environment that you propose, this means that a sizeable number of the teenagers who are most vulnerable to begin with would still be able to access AI partners.
You seem to assume monogamy in the sense that a person needs to break up with an AI partner to develop another relationship with a human. I think that would be unlikely.
Well, you think it’s unlikely; I think it will be the case for 20-80% of people in AI relationships (wide bounds because I’m not an expert). How about AI romance startups proving that it really is “unlikely”, i.e., that 90+% of people can “mix” human and AI romance without issue, in long-term psychological studies? The FDA demands that drug companies prove the long-term safety of new medications; why don’t we hold a technology that will obviously intrude on human psychology to the same standard?
The base rate for depression alone among 12-17-year-olds is 20%. A company that sells an AI partner would likely be able to optimize the partner so that it helps with depression, and run a study to prove it.
You are arguing against a proposed policy that hasn’t even been laid out yet. I think barring clinically depressed people from AI romance is a much weaker case, and I’m not ready to defend it here. And even if it would be a mistake to give depressed people access to AI partners, simply allowing anyone over 18 to use AI partners would be a bigger mistake anyway, as a matter of simple logic, because “anyone” includes “depressed people”.