Missing argument pieces: you lack an argument for why higher fertility rates are good, but perhaps more importantly, for to whom such benefits accrue (i.e., how much of the alleged benefit is spillover/externalities).
I thought this was such table stakes in EA/LessWrong circles that it's not worth justifying. Will MacAskill in “What We Owe The Future” spent many pages arguing for why more people is good and procreation is good. I assumed that most readers of the post have either read this book or absorbed these positions through other materials. Regardless, even if you don’t agree with these conclusions, you can treat this as an assumption I’m making in the post (indeed, if we don’t care about future unborn lives, why care about society at all?)
I think you engage this somewhat when you discuss rationale for the severity of the law, but your proposal would require the deliberate denial of a right of free association to certain people—if you think this is okay, you should explicitly state the criterion of severity by which such a decision should be made.
Sorry, I don’t understand what “denial of association to certain people” you are talking about here. Or do you take “AI partners” to be kinds of “people”, as in Mitchell Porter’s comment?
If this restriction is ultimately an acceptable cost, why does the state—in particular—have the right / obligation to enforce this, as opposed to leaving it to individual good judgement / families / community / etc?
This is an extremely general regulation and political science question, not specific to the case of AI partners at all. Why don’t we defer to individual good judgement/families/communities on all other questions, such as buying alcohol before 18 (or 21), buying and taking drugs or other (unchecked) substances, adopting children, etc.? I think this is fundamentally because individuals (as well as their families and communities) often cannot exercise good judgement, at least until a certain age. Entering a real loving relationship with an AI is a serious decision that can transform a person and their entire future life trajectory, and I think 18-year-olds don’t have nearly the capacity and knowledge to make such a decision rationally and consciously.
What about the former incel who is now happily married to his AI gf?
By “incel”, do you mean a particular subculture, or all people who have failed to find any intimacy for a long time, which is, by the way, a third of all young men in America? The “young man” from my story belongs to this wider group. Regarding this wider group, my response would be: life with just porn (but no AI partners) is not so bad that we need to rush AI partners in, and a lot of these people will find satisfying human relationships before they are 30. If they embark on AI partnerships, however, I’m afraid they could be “locked in” there and never find satisfying human relationships afterwards.
Or the polycouple that has an AI just because they think it’s neat?
I hadn’t thought about this case. Off the cuff it sounds OK to me, yes.
You should be more comparative: compare your primary impacts with other possible concerns in terms of magnitude, group-of-incidence (i.e., is it harming groups that society has a special obligation to protect?), and severity.
I didn’t quite understand what you mean here.
You also need stronger support for some assertions on which your case hinges: why should it be so much more likely that an AI is going to fall far short of a human partner? If that’s not true, how important is this compared to your points on fertility?
I think it falls short for some people and not others (see this comment). I don’t know what the relative prevalence will be. But anyway, I think this is relatively less important than the fertility point. Life without an AI partner is not really suffering, after all, for most people (unless they are really depressed, feel completely unloved, worthless, etc., about which I made reservations). Incidentally, I don’t think most people could make a decision to be child-free in full consciousness before they are 30, and I found it surprising that the minimum age for vasectomy is 18 (in the United States). But after they are 30, I think people should have full freedom to decide they are going to be child-free and live in a relationship with an AI partner happily thereafter.
How should these benefits be compared to what a contralocutor would argue occurs in the best case (in likelihood, magnitude, and relative importance)?
Answering this rationally hinges on “solving ethics”, which nobody has done, neither me nor my contralocutors (and which is likely not possible in principle, if ethics are subjective and constructed all the way down). So ultimately, this will be based on vibes-based intuitions about the relative importance of society and the individual, which I (and my contralocutors) will then find persuasive ways to justify. But this is not a matter of rationality; it is ultimately a matter of politics.
I thought this was such table stakes in EA/LessWrong circles that it's not worth justifying. Will MacAskill in “What We Owe The Future” spent many pages arguing for why more people is good and procreation is good. I assumed that most readers of the post have either read this book or absorbed these positions through other materials.
I think you’re mistaken about what’s considered table stakes on LW. We don’t make such detailed assumptions about the values of people here. Maybe the EA forum is different? On LW, newcomers are generally pointed to the sequences, which is much more about epistemology than population ethics.
In any case, it’s somewhat difficult to square your stated values with the policy you endorse. In the long run, the limiting factor on the number of people that can live is the fact that our universe has a limited quantity of resources. The number of people willing to bear and raise children in “western countries” in the early 21st century is not the bottleneck. Even if we could double the population overnight, the number of people ever to live in the history of the universe would be about the same, since it depends mostly on the amount of thermodynamic free energy contained in the regions of space we can reach.
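To make the scale of that claim concrete, here is a toy back-of-envelope sketch. The long-run capacity figure is a purely illustrative assumption standing in for the astronomical estimates in the longtermist literature, not an established number:

```python
# Toy order-of-magnitude sketch of the argument above. All numbers are
# illustrative assumptions, not established estimates.

present_population = 8e9   # roughly the number of people alive today
long_run_capacity = 1e35   # assumed total lives the reachable universe could support

# Doubling today's population adds one extra present-sized cohort to the
# total number of people who will ever live.
extra_lives = present_population
fraction_of_total = extra_lives / long_run_capacity

print(f"Extra lives from doubling overnight: ~{extra_lives:.0e}")
print(f"As a fraction of the assumed long-run total: ~{fraction_of_total:.0e}")
# => ~8e-26 of the total: negligible, which is the point being made above.
```

Under any remotely astronomical capacity figure the fraction stays vanishingly small, so the argument doesn't depend on the exact number chosen.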
It would certainly be bad if humanity dies out or our civilization crumbles because we produced too few offspring. But fertility in many parts of the world is still quite high, so that seems unlikely. While we still might like to make it easier and more enjoyable for people to have children, it seems backwards to try and induce people to have children by banning things they might substitute for it. It’s not going to change the number of unborn future people.
Please read MacAskill or someone else on this topic. They argue for more people in Western countries and in this century not for galaxy-brained reasons but for mundane ones that have little to do with their overall long-termism. Roughly, for them, it seems that having more people in Western countries this century lowers the risk of the “great stagnation”.
Also, if long-termism is wrong but sentientism is still right, and we are not going to outlive AGI (though not too soon; say, in 100 years), it’s good to produce more happy-ish sentient observers while we are here and AGI hasn’t yet overtaken the planet.
But fertility in many parts of the world is still quite high, so that seems unlikely.
Fertility rates are dropping rapidly across the globe. If Africa is lifted out of poverty and educational deficits through near-term AI advances, we may see a really rapid and precipitous decline in population. Elon Musk actually worries quite a lot about this risk and urges everyone to have more kids (he himself has 10).
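For intuition on how quickly sub-replacement fertility compounds, here is a minimal sketch of generation-by-generation decline. The TFR values and the four-generation horizon are illustrative assumptions, not forecasts, and the model ignores mortality timing, migration, and age structure:

```python
# Minimal sketch: at a constant total fertility rate (TFR), each generation
# is roughly (TFR / replacement) times the size of the previous one.

REPLACEMENT_TFR = 2.1  # approximate rate at which a population holds steady

def relative_population(generations: int, tfr: float) -> float:
    """Population relative to the starting size after n generations."""
    return (tfr / REPLACEMENT_TFR) ** generations

# Four generations is on the order of a century at ~25 years per generation.
for tfr in (1.8, 1.5, 1.2):
    print(f"TFR {tfr}: {relative_population(4, tfr):.0%} of the starting "
          f"population after 4 generations")
# => roughly 54%, 26%, and 11% respectively: modest-looking TFR gaps
#    compound into steep declines within a century.
```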