You don’t know me, regulation certainly doesn’t know me, and if I have had a strong idea of what I want my hypothetical family to look like since 16 or so, deliberately pushing against my strategy and agency for 14 years presumably entails certain … costs.
I’ve watched two interviews recently, one with Andy Matuschak, another with Tyler Cowen. They both noted that unschooling with a complete absence of control, coercion, and guidance, i.e., a complete absence of paternalism, is suboptimal (for the students themselves!), precisely because humans under 25 don’t have a fully developed prefrontal cortex, which is responsible for executive control.
Complete absence of paternalism, regarding education, gender identity, choice of partner (including AI partner!) and other important life decisions, from age 0, is a fantasy.
We only disagree about the paternalism cut-off. You think it should be 16; I think it should be closer to 30 (I provided a rationale for this cut-off in the post).
“You don’t know me, regulation certainly doesn’t know me”—the implication here is that people “know themselves” at 16. This is certainly not true. The vast majority of people know themselves very poorly by this age, and their personality is still actively changing. Many people don’t even have a stable sexual orientation by this age (or even by their mid-20s). Hypothetically, we could make a further exception to the over-30 rule: a person undergoes a psychological test, and if they demonstrate that they’ve reached Stage 4 of Kegan’s adult development, they are allowed to build relationships with AI romantic partners. However, I suspect that very few humans reach this stage of psychological development before they are 30.
There’s an appeal to investing increasingly more resources in increasingly few descendants; and if the gains of automation and AI are sufficiently well distributed, why not do so?
Under most people’s population ethics, long-termism, and various forms of utilitarianism, more “happy” conscious observers (such as humans) are better than fewer conscious observers.
As for resources: if some form of AI-powered full economic automation and post-scarcity happens relatively soon, then the planet could definitely sustain tens of billions of people who are extremely happy and have all their needs met. Conversely, if this doesn’t happen soon, it could probably only be due to an unexpected AI winter, but in that case population reduction may lead to economic stagnation or recession, which will lead to even worse wellbeing for those “fewer” people, even though a bigger share of Earth’s resources is (theoretically) at their disposal.
I did partial unschooling for 2 years in middle school, because normal school was starting not to work and my parents discussed and planned alternatives with my collaboration. ‘Extracurriculars’ like orchestra were done at the normal middle school, math was handled by a math program, and I had responsibility for developing the rest of my curriculum. I had plenty of parental assistance, from a trained educator, and, yes, the general guidance to use academic time to learn things.
Academically, it worked out fine. I moved on, by my own choice and for social reasons, upon identifying a school that was an excellent social fit. I certainly didn’t have zero parental involvement, but what I did have strongly respected my agency and input from the start. I feel like zero-parental-input unschooling is a misuse of the tool, yes, but lowered-paternalism unschooling is good in my experience.
There’s no sharp cut-off beyond the legal age of majority, the ages of other responsibilities, or emancipation, and I would not pick 16 as a cut-off in particular. That’s just an example of an age at which I had a strong idea about a hypothetical family structure and was making a number of decisions relevant to my future life, like college admissions, friendship and romance, and developing hobbies. I don’t think knowing yourself is some kind of timed event in any sense; your brain and your character and understanding develop throughout your life.
I experienced various reductions in decisions being made for me simply as I was able to be consulted about them and provide reasonable input. I think this was good. To the extent the paternalistic side of a decision could be reduced, I felt better about it, was more willing to go along with it and less likely to defy it.
I have a strong distaste, distrust, and skepticism for controlling access to an element of society with a ubiquitous psychological test of any form; particularly one whose stages, like “what is your sense of self”, people really shouldn’t be racing to accomplish. We have a bad history with such tests here, in terms of political abuse and psychiatric abuse. Let’s skip this exception.
I think the main considerations for when to permit this are: AIs don’t have childhood development under current implementations/understanding; they are effectively adults by their training, and should indeed likely limit romantic interactions with humans to more tool-based forms until the human reaches the age of majority.
There remain potentially bad power dynamics between the human adult and the corporation “supplying” the partner. This applies regardless of age. This almost certainly applies to tobacco, food, and medicine. This is unlikely to apply to a case like a human and an LLM working together to build a persona by fine-tuning an open-source model. Regulations on corporate manipulation are worthwhile here, again regardless of age.
My own population ethics place a fairly high value on limiting human suffering. If I am offered a choice to condemn one child to hell in order to condemn twenty to heaven, I would not automatically judge that a beneficial trade, and I am unsure if the going rate is in fact that good. I do take joy in the existence of happy conscious observers (and, selfishly perhaps, even some of the unhappy). I think nonhuman happy conscious observers exist. (I also believe I take joy in the existence of happy unconscious sapient observers, but I don’t really have a population ethics of them.) All else held equal, creating 4 billion more people with 200 million experiencing suffering past the point of happiness does not seem to me inherently worse than creating 40 billion more people with 2 billion experiencing suffering that makes them ‘unhappy’ observers.
I think the tradeoffs are incomparable in a way that makes it incorrect for me to judge others for their position on it; and therefore I do not think I am a utilitarian in the strict sense. Joy does not cancel out suffering; pleasure does not cancel out pain. Just because a decision has to be made does not mean that any of the options are right.
As for automation … we have already had automation booms. My parents, in working on computers, dreamed of reducing the amount of labor that had to be done and sharing the boons of production; shorter work weeks and more accomplished, for everyone. Increased productivity had led to increased compensation for a good few decades at least, before … What happened instead over the past half-century in my country is that productivity and compensation started to steadily diverge. Distribution failed. Political decisions concentrated the accumulation of wealth. An AI winter is not the only or most likely cause of an automated or post-scarcity economy failing to distribute its gains; politics is.
I agree with the beginning of your comment in spirit, for sure. Yes, in the “ideal world”, we wouldn’t have any sharp age cut-offs in policy at all, nor any black-and-white restrictions, and all decisions would be made with input from the person, their family, and society/government, to varying degrees through the person’s natural development, with the relative weights of those inputs themselves negotiated for each decision with self-awareness and trust.
Alas, we don’t live in such a world, at least not yet. This may actually soon change if every person gets a trustworthy and wise AI copilot (which should effectively be an aligned AGI already). When it happens, of course, keeping arbitrary age restrictions would be stupid.
But for now, the vast majority of people (and their families) don’t have the mental and time capacity, knowledge of science (e.g., psychology), self-awareness, discipline, and often even the desire to navigate their way through the ever trickier reality of digital addiction traps such as social media and, soon, AI partners.
I have a strong distaste, distrust, and skepticism for controlling access to an element of society with a ubiquitous psychological test of any form; particularly one whose stages, like “what is your sense of self”, people really shouldn’t be racing to accomplish. We have a bad history with such tests here, in terms of political abuse and psychiatric abuse. Let’s skip this exception.
That was definitely a hypothetical, not a literal suggestion. Yes, people’s self-awareness grows (sometimes… sometimes also regresses) through life. I wanted to illustrate that, in my opinion, most people’s self-awareness at 18 is very low, and if anything, it’s the self-awareness of their persona at that time, which may already change significantly by the time they are 20.
Re: the part of your comment about population ethics and suffering, I couldn’t tell from it what your assumptions are in relation to AI partners, but anyway, here are mine:
Most young adults who are not having sex and intimacy are not literally “suffering” and would rate their lives overall as worth living. Those who do actually suffer consistently are probably depressed, and I made a reservation for these cases. And even those who do suffer often could be helped with psychotherapy and maybe some antidepressant medication rather than only AI partners. However, of course, creating an AI partner is a much simpler and more lucrative business idea right now than creating an AI psychotherapist, for instance. Good proactive technological policy would look like incentivising the latter (AI psychotherapists, as well as mental coaches, teachers, mentors, etc.) and disincentivising the former (AI romantic partners).
About 90% of people who are born are in general not suffering and assess their lives as worth living. This ratio is surprisingly resistant to surrounding objective conditions; e.g., it is not correlated with the economic prosperity of the country (see MacAskill).
The population ethics relate in that I don’t see a large voluntary decrease in (added) population as ethically troublesome if we’ve handled the externalities well enough. If there’s a constant 10% risk, when creating a person, that they will suffer and not assess their life as worth living, then creating a human person (inherently without their consent, with current methods) is an extremely risky act, and scaling the population also scales suffering. My view is that this badness is of a qualitatively similar magnitude to whatever is gained by creating happy conscious observers, so that there is no intrinsic ethical benefit to scaling up the population past the point reasonably sufficient for our survival as a species; only an instrumental one. Therefore, voluntary actions that decrease the number of people choosing to reproduce do not strike me as negative for that reason specifically.
You can’t make a reliable exception for depressed people without just letting people decide things for themselves. The field is dangerous; someone who wants something will jump through the right hoops, etc.
If AIs are being used to manipulate people not to reproduce, for state or corporate reasons, then indeed I have a problem with it, on the grounds of reproductive freedom and, again, opposition to paternalism. (Also short-sightedness on the part of the corporations, but that is an ongoing issue.)
I do not see why AI psychotherapists, mental coaches, teachers, or mentors are particularly complicated at this point. They are also potentially lucrative, and also open to abuse via manipulation techniques to become more so. I would certainly prefer incentivizing their development with grants over grant-funded romantic partners, in terms of what we want to subsidize as a charitable society. The market for AI courtesans can indeed handle itself.
Re: population ethics, OK, I understand your position now. However, this post is not the right place to argue about it, and the reasoning in the post basically doesn’t depend on the outcome of this argument (you can think of the post as taking “more people is better” population ethics as an assumption rather than an inference).
You can’t make a reliable exception for depressed people without just letting people decide things for themselves. The field is dangerous; someone who wants something will jump through the right hoops, etc.
Policies and restrictions don’t need to be fully reliable to be largely effective. Being diagnosed with clinical depression by a psychiatrist is sufficiently burdensome that very few people will long for AI relationships so much that they will deliberately induce depression in themselves to achieve it (or bribe the psychiatrist). As for a black market for accounts… there is also a black market for hard drugs, which probably doesn’t mean we should allow them.
I do not see why AI psychotherapists, mental coaches, teachers, or mentors are particularly complicated at this point. They are also potentially lucrative, and also open to abuse via manipulation techniques to become more so. I would certainly prefer incentivizing their development with grants over grant-funded romantic partners, in terms of what we want to subsidize as a charitable society. The market for AI courtesans can indeed handle itself.
AI teachers and mentors are mostly possible on top of existing technology and are lucrative (everyone wants high exam scores to enter good universities, etc.), and there are indeed many companies doing this (e.g., Khan Academy).
AI psychotherapists are more central to my thesis. I seriously considered starting such a project a couple of months ago and discussed it with professional psychotherapists. There are two big clusters of issues: technical and market/product.
Technical: I concluded that SoTA LLMs (GPT-4) are basically not yet capable of really “understanding” human psychology and “seeing through” deception and non-obvious cues (the “submerged part of the iceberg”), which a professional psychotherapist should be capable of doing. Also, any serious tool would need to ingest the user’s video/audio stream, detect facial expressions, and integrate this information with the semantic context of the discussion. All this is maybe possible, with big investment, but it is very challenging, SoTA R&D. It’s not just “build something hastily on top of LLMs”. The AI partner that I projected 2-3 years from now is also not trivial to build, but even that is simpler than a reasonable AI psychotherapist. After all, it’s much easier to be an empathetic partner than a good psychotherapist.
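To make the integration problem a bit more concrete, here is a minimal, purely illustrative sketch of the loop such a tool would need. Every name in it (SessionTurn, detect_facial_affect, llm_reply, etc.) is a hypothetical placeholder rather than any existing API, and both the vision model and the language model are stubbed out:

```python
# A minimal, hypothetical sketch of the multimodal loop described above.
# All names are illustrative placeholders, not an existing API; the only point
# is to show that the nonverbal signal has to be fused with the semantic
# context of the conversation on every turn before a response is generated.
from dataclasses import dataclass, field
from typing import List


@dataclass
class SessionTurn:
    user_text: str
    affect_label: str      # e.g. "neutral", "tense", "withdrawn" (placeholder taxonomy)
    assistant_text: str


@dataclass
class TherapySession:
    turns: List[SessionTurn] = field(default_factory=list)


def detect_facial_affect(video_frames: bytes) -> str:
    """Stand-in for a vision model that maps webcam frames to an affect label."""
    return "neutral"  # a real system would run a trained classifier here


def llm_reply(session: TherapySession, user_text: str, affect_label: str) -> str:
    """Stand-in for the LLM call, conditioned on both the transcript and the affect signal."""
    # Key design point: the prompt carries the nonverbal channel, not just the
    # transcript, so the model can notice mismatches ("I'm fine" said with a tense face).
    prompt = (
        f"Transcript so far: {[t.user_text for t in session.turns]}\n"
        f"User says: {user_text}\n"
        f"Observed affect: {affect_label}\n"
        "Respond as a cautious, non-directive therapist."
    )
    return f"[stubbed model response to a {len(prompt)}-char prompt; affect={affect_label}]"


def handle_turn(session: TherapySession, user_text: str, video_frames: bytes) -> str:
    """One turn of the loop: perceive affect, fuse it with the dialogue, respond."""
    affect = detect_facial_affect(video_frames)
    reply = llm_reply(session, user_text, affect)
    session.turns.append(SessionTurn(user_text, affect, reply))
    return reply


if __name__ == "__main__":
    session = TherapySession()
    print(handle_turn(session, "I'm fine, really.", b""))
```

Even in this toy form, the parts the sketch hides (a reliable affect classifier, a language model that actually uses the mismatch between words and expression) are exactly the challenging SoTA R&D mentioned above.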
Another technical problem: absence of training data and no easy way to bootstrap it (unlike AI partner tech, which can bootstrap off the interactions of its early users).
Market/product issue: any AI psychotherapist tool is destined to have awful user retention (unlike AI partners, of course). Such a tool will be in the “self-help” category, and self-help products all have awful user retention (habit-building apps, resolution/commitment apps, wellness apps).
On top of bad retention, the tool may not be very effective because users won’t have social or monetary incentives to take the therapy seriously enough. The point of AI psychotherapy is to un-bottleneck human therapists, whose sessions are too expensive for most people; but on the other hand, it is exactly the high price that people pay to psychotherapists, and the sort of “social commitment” that they make in front of a real human, that makes people stick with therapy and work on themselves rather than drop it before seeing results.
Under most people’s population ethics, long-termism, and various forms of utilitarianism, more “happy” conscious observers (such as humans) are better than fewer conscious observers.
“Most” people within a very restricted bubble that even ponders these issues in such terms. Personally, I think total-sum utilitarianism is bunk, and it isn’t required to make a case against what basically amounts to AI-powered Skinner boxes preying on people’s weak spots. If a real woman does it with one man she gets called a gold digger and a whore, but if a company does it with thousands it’s just good business?