Tl;dr: your argument doesn’t meaningfully engage the counterproposition, and I think this not only harms your argument but severely limits the extent to which the discussion in the comments can be productive. I’ll confess that the wall of text below was written because you made me angry, not because I’m so invested in epistemic virtue—that said, I hope it will be taken as constructive criticism that will help the comments section be more valuable for discussion :)
Missing argument pieces: you lack an argument for why higher fertility rates are good—and, perhaps more importantly, to whom such benefits accrue (i.e., how much of the alleged benefit is spillover/externalities). Your proposal also requires a metric of justification (i.e., “X is good” is typically insufficient to entail “the government should do X”—more is needed). I think you engage this somewhat when you discuss the rationale for the severity of the law, but your proposal would require the deliberate denial of a right of free association to certain people—if you think this is okay, you should explicitly state the criterion of severity by which such a decision should be made. If this restriction is ultimately an acceptable cost, why does the state—in particular—have the right/obligation to enforce this, as opposed to leaving it to individual good judgement/families/community/etc.? (Though you address this in some of the comments, you really only address the “worst-case” scenario; see my next remark.)
You need more nuanced characterizations, and a comparison of the different types of outcomes you could expect to see. While you take a mainline case with respect to technology, your characterization of “who is seeking AI romantic partners and why” is, with respect, very caricatured—there’s no real engagement with the cases your contralocutor would bring up. What about the former incel who is now happily married to his AI gf? Or the polycule that has an AI just because they think it’s neat? Are these extremely unlikely in your reasoning? Relatively unimportant? More engagement with the cases inconvenient to your argument would make it much stronger (and incidentally tends to make it sound more considerate).
You should be more comparative: compare your primary impacts with other possible concerns in terms of magnitude, group-of-incidence (i.e., is it harming groups that society has a special obligation to protect?), and severity. You also need stronger support for some assertions on which your case hinges: why should it be so much more likely that an AI will fall far short of a human partner? If that’s not true, how important is this compared to your points on fertility? How should these benefits be compared to what a contralocutor would argue occurs in the best case (in likelihood, magnitude, and relative importance)?
(these last two are a little bit more specific—I get that you’re not really trying to write the bill right now, so if this is unfair to the spirit of your suggestion, no worries, feel free to ignore it)
your legislation proposal is highly specific and also socially nonstandard—the “18 is an adult” line is, of course, essentially arbitrary, and there’s a reasonable case to be made either way, but because 30 is very high (I’m not aware of any similarly sweeping law that sets it above 21, in the US at least), you assume a burden of proof that, imo, is heavier than just stating that the prefrontal cortex finishes developing around 25 and then tacking on five years (why?). Similarly for the remaining two criteria—to suggest that mental illnesses disqualify someone from being in a romantic relationship, I think, clearly requires some qualification/justification—it may or may not be the case that one might be a danger to one’s partner, but again, what is the comparative likelihood? What justifies legal intervention when the partner could presumably leave or not according to their own judgement? How harmful would this be to someone wrongly diagnosed? (To be clear, I interpreted you as arguing that some people should be prevented from having partners, and only then should AI partners be made available to them—it’s ambiguous to me whether that was the case you were trying to make, or whether restricting people’s partners was a ground assumption—if the latter, it’s not clear why.)
for what it’s worth, your comment on porn/OnlyFans, etc. doesn’t actually stop the whataboutism—you compare AI romance to things nominally in the same reference class, assert that they’re bad too, and assert that it has no bearing on your argument that these things are not restricted in the same way. It’s fine to bite the bullet and say “I stand by my reasoning above; these should be banned too”, but you should argue for that explicitly if that’s what you think. It is also not a broadly accepted fact that online dating/porn/OnlyFans/etc. constitute a net harm; a well-fleshed-out argument above could lead someone to conclude this, but it weakens your argument to merely assert it in the teeth of plausible objections.
Thanks for reading, hope it was respectful and productive :)
your legislation proposal is highly specific and also socially nonstandard—the “18 is an adult” line is, of course, essentially arbitrary, and there’s a reasonable case to be made either way, but because 30 is very high (I’m not aware of any similarly sweeping law that sets it above 21, in the US at least), you assume a burden of proof that, imo, is heavier than just stating that the prefrontal cortex finishes developing around 25 and then tacking on five years (why?).
First of all, as our society and civilisation get more complex, “18 is an adult” looks more and more comically low and inadequate.
Second, I think a better reference class is decisions that may have irreversible consequences. E.g., the minimum age for voluntary human sterilisation is 25, 35, and even 40 years in some countries (but is apparently just 18 in the US, which is a joke).
I cannot easily find statistics on the minimum age at which a single person can adopt a child, but it appears to be 30 years in the US. If the rationale behind this policy were about financial stability only, why can’t rich, single 25-year-olds adopt?
I think it’s better to compare entering an AI relationship with these policies than with drinking alcohol, watching porn, or having sex with humans (individual instances of which, for the most part, don’t change human lives irreversibly, if practiced safely; and yes, it would be prudent to ban unprotected sex for unmarried people under 25, but alas, such a policy would be unenforceable).
Similarly for the remaining two criteria—to suggest that mental illnesses disqualify someone from being in a romantic relationship, I think, clearly requires some qualification/justification—it may or may not be the case that one might be a danger to one’s partner, but again, what is the comparative likelihood?
I don’t think any mental condition disqualifies a person from having a human relationship, but I think it shifts the balance in the other direction. E.g., if a person has bouts of uncontrollable aggression and a history of domestic abuse and violence, it makes much less sense to bar them from AI partners and thus compel them to find new potential human victims (although they are not prohibited from doing that, unless jailed).
I interpreted you as arguing that some people should be prevented from having partners, and only then should AI partners be made available to them
No, this is not what I meant, see above.
for what it’s worth, your comment on porn/OnlyFans, etc. doesn’t actually stop the whataboutism—you compare AI romance to things nominally in the same reference class, assert that they’re bad too, and assert that it has no bearing on your argument that these things are not restricted in the same way. It’s fine to bite the bullet and say “I stand by my reasoning above; these should be banned too”, but you should argue for that explicitly if that’s what you think. It is also not a broadly accepted fact that online dating/porn/OnlyFans/etc. constitute a net harm; a well-fleshed-out argument above could lead someone to conclude this, but it weakens your argument to merely assert it in the teeth of plausible objections.
All these things are at least mildly bad for society; I think this is very uncontroversial. What is much more doubtful (including for me) is how the effects of these things on the individual weigh against their effects on society. The balance may be different for different things, and it is also different from the respective balance for AI partners.
First, discussing a ban on porn is unproductive because such a ban would be completely unenforceable.
Online dating is a very complicated matter and I don’t want to discuss it here, or anywhere really, especially to “justify” my position on AI partners. There are lots of takes on this issue already. But what I would say for sure is that the design of the currently dominant online dating systems such as Tinder is very suboptimal, just as the design of the presently dominant social media platforms is. There could be online dating designs healthier than Tinder for both individuals and society, but Tinder won because swiping itself is addictive (I speak from first-hand experience here; I’m addicted to swiping on Tinder).
OnlyFans, I think, is just a cancer and should be shut down (e.g., see here; although this particular screenshot is from Twitch and not OnlyFans, I think OnlyFans is full of this shit, too). It doesn’t make individual lives any more substantially better than porn does, but it has more negative effects on society.
Note that I also don’t suggest a complete ban on AI partners, but mostly a restriction for under-30s.
Missing argument pieces: you lack an argument for why higher fertility rates are good—and, perhaps more importantly, to whom such benefits accrue (i.e., how much of the alleged benefit is spillover/externalities).
I thought this was such table stakes in EA/LessWrong circles that it’s not worth justifying. Will MacAskill, in “What We Owe the Future”, spent many pages arguing for why more people are good and procreation is good. I assumed that most readers of the post have either read this book or absorbed these positions through other materials. Regardless, even if you don’t agree with these conclusions, you can treat this as an assumption that I’m making in the post (indeed, if we don’t care about future unborn lives, why care about society at all?).
I think you engage this somewhat when you discuss the rationale for the severity of the law, but your proposal would require the deliberate denial of a right of free association to certain people—if you think this is okay, you should explicitly state the criterion of severity by which such a decision should be made.
Sorry, I don’t understand what “denial of association to certain people” you are talking about here. Or do you take “AI partners” to be kinds of “people”, as in Mitchell Porter’s comment?
If this restriction is ultimately an acceptable cost, why does the state—in particular—have the right/obligation to enforce this, as opposed to leaving it to individual good judgement/families/community/etc.?
This is an extremely general question of regulation and political science, not specific to the case of AI partners at all. Why don’t we defer all other such questions to individual good judgement/families/communities—questions such as buying alcohol before 18 (or 21), buying and taking drugs or other (unchecked) substances, adopting children, etc.? I think fundamentally this is because individuals (as well as their families and communities) often cannot exercise good judgement, at least until a certain age. Entering a real loving relationship with an AI is a serious decision that can transform a person and their entire future life trajectory, and I think 18-year-olds don’t have nearly the capacity and knowledge to make such a decision rationally and consciously.
what about the former incel who is now happily married to his AI gf?
By “incel”, do you mean the particular subculture, or all people who have failed to find any intimacy in a long time—which, by the way, is a third of all young men in America? The “young man” from my story belongs to this wider group. Regarding this wider group, my response would be: life with just porn (but no AI partners) is not so bad that we need to rush AI partners in, and a lot of these people will find satisfying human relationships before they are 30. If they embark on AI partnerships, however, I’m afraid they could be “locked in” there and never find satisfying human relationships afterwards.
Or the polycule that has an AI just because they think it’s neat?
I hadn’t thought about this case. Off the cuff, it sounds OK to me, yes.
You should be more comparative: compare your primary impacts with other possible concerns in terms of magnitude, group-of-incidence (i.e., is it harming groups that society has a special obligation to protect?), and severity.
I didn’t quite understand what you mean here.
You also need stronger support for some assertions on which your case hinges: why should it be so much more likely that an AI will fall far short of a human partner? If that’s not true, how important is this compared to your points on fertility?
I think it will fall short for some people and not others (see this comment). I don’t know what the relative prevalence will be. But anyway, I think this is relatively less important than the fertility point. Life without an AI partner is not really suffering, after all, for most people (unless they are really depressed, feel completely unloved, worthless, etc., which I made reservations about). Incidentally, I don’t think most people could make a decision to be child-free in full consciousness before they are 30, and I found it surprising that the minimum age for a vasectomy is 18 (in the United States). But after they are 30, I think people should have full freedom to decide that they are going to be child-free and live in a relationship with an AI partner happily thereafter.
How should these benefits be compared to what a contralocutor would argue occurs in the best case (in likelihood, magnitude, and relative importance)?
Answering this rationally hinges on “solving ethics”, which nobody has done, neither me nor my contralocutors (and which is likely not possible in principle, if ethics is subjective and constructed all the way down). So ultimately, this will be based on vibes-level intuitions about the relative importance of society and the individual, which I (and my contralocutors) will then find persuasive ways to justify. But this is not a matter of rationality yet; it is ultimately a matter of politics.
I thought this was such table stakes in EA/LessWrong circles that it’s not worth justifying. Will MacAskill, in “What We Owe the Future”, spent many pages arguing for why more people are good and procreation is good. I assumed that most readers of the post have either read this book or absorbed these positions through other materials.
I think you’re mistaken about what’s considered table stakes on LW. We don’t make such detailed assumptions about people’s values here. Maybe the EA Forum is different? On LW, newcomers are generally pointed to the Sequences, which are much more about epistemology than population ethics.
In any case, it’s somewhat difficult to square your stated values with the policy you endorse. In the long run, the limiting factor on the number of people that can live is the fact that our universe has a limited quantity of resources. The number of people willing to bear and raise children in “western countries” in the early 21st century is not the bottleneck. Even if we could double the population overnight, the number of people ever to live in the history of the universe would be about the same, since it depends mostly on the amount of thermodynamic free energy contained in the regions of space we can reach.
It would certainly be bad if humanity dies out or our civilization crumbles because we produced too few offspring. But fertility in many parts of the world is still quite high, so that seems unlikely. While we still might like to make it easier and more enjoyable for people to have children, it seems backwards to try and induce people to have children by banning things they might substitute for it. It’s not going to change the number of unborn future people.
Please read MacAskill or someone else on this topic. They argue for more people in Western countries, and in this century, not for galaxy-brained reasons but for rather mundane ones that have little to do with their overall long-termism. Roughly, for them, having more people in Western countries this century lowers the risk of a “great stagnation”.
Also, if long-termism is wrong but sentientism is still right, and we are not going to outlive AGI (not too soon, though—let’s say in 100 years), it’s good to produce more happy-ish sentient observers while we are here and AGI hasn’t yet overtaken the planet.
But fertility in many parts of the world is still quite high, so that seems unlikely.
The fertility rate is dropping rapidly across the globe. If Africa is lifted out of poverty and insufficient education through some near-term AI advances, we may see a really precipitous decline in population. Elon Musk actually worries quite a lot about this risk and advocates for everyone to have more kids (he himself has 10).