I agree with all this, and it is all extremely sad. But it seems irrelevant to the question of AI partners: if there are other problems that depress the fertility rate, it doesn’t follow that we shouldn’t deal with this upcoming one. Moreover, while problems like financial inequality (and precarity), or the declining biological fertility of men and women due to stress and environmental pollution, are big systemic problems that are very hard and very expensive to fix, it’s currently relatively cheap to prevent a further potential fertility drop resulting from widespread adoption of AI partners by under-30s: just pass a regulation in major countries!
I don’t think I agree. That might be cheap financially, yes. But unless there’s a strong argument that AI partners harm the humans using them, I don’t think society has a sufficiently compelling reason to justify a ban. In particular, I don’t think (and I assume most agree?) that it’s a good idea to coerce people into having children they don’t want, so the relevant questions for me are: can everyone who wants children have the number of children they want? And relatedly, will AI partners cause more people who want children to become unable to have them? The corresponding societal intervention is then: how do we help ensure that those who want children can have them? Maybe addressing that still leads to consistently below-replacement fertility, in which case, sure, we should consider other paths. But we’re not actually doing that.
I think an adequate social and tech policy for the 21st century should
Recognise that needs/wants/desires/beliefs and new social constructs can be manufactured, and discuss this phenomenon explicitly, and
Deal with this social engineering consistently: either by really going out of its way to protect people’s agency and self-determination (today, people’s wants, needs, beliefs, and personalities are sculpted by various actors from the time they are toddlers first watching videos on iPads, and this influence only strengthens thereafter), or by allowing a “free market of influences” while also participating in it, subsidising the projects that will benefit society itself.
The USA seems much closer to the latter option, but when people discuss policy in the US, it’s conventional not to acknowledge (see The Elephant in the Brain) the real social engineering already perpetrated by both state and non-state actors (from the pledge of allegiance to church to Instagram to Coca-Cola), and to presume that social engineering done by the state itself is taboo, or at least a tool of last resort.
But this presumption doesn’t match what is already happening: apart from the pledge of allegiance, there are many other ways in which the state (and state-adjacent institutions and structures) is, or was, proactive in shaping people’s beliefs or wants in a certain direction, or in preventing them from being shaped in a certain direction: the Red Scare, various forms of official and unofficial (yet institutionalised) censorship, and the regulation of nicotine marketing are a few examples that come to mind first.
Now, treating personal relationships as a “sacred libertarian domain” and barring the state from exerting any influence on how people’s wants and needs around personal relationships are formed (even through the recommended school curriculum, albeit a very ineffective approach to social engineering), while allowing corporate actors (such as AI partner startups and online dating platforms) to shape these needs and even rewire society in whatever way they please, is an inconsistent and self-defeating strategy for the society, and therefore for the state, too.
The state would do well to realise that its strength rests not only on overt patriotism/nationalism, military and law-enforcement “national security”, and the economy, but also on the health and strength of the society itself.
P. S. All of the above doesn’t mean that I actually prefer the “second option”. The first option, that is, protecting human agency, seems much more beautiful and “truly liberal” to me. However, this vision is completely incompatible with present-form capitalism (to start, it probably means that ads should be banned completely, the entire educational system changed, and the need for labour-to-earn-a-living resolved through AI and automation), so it doesn’t make much practical sense to discuss this option here.