I think you should have a kid if you would have wanted one without recent AI progress. Timelines are still very uncertain, and strong AGI could still be decades away. Parenthood is strongly value creating and extremely rewarding (if hard at times) and that’s true in many many worlds.
In fact it’s hard to find probable worlds where having kids is a really bad idea, IMO. If we solve alignment and end up in AI utopia, having kids is great! If we don’t solve alignment and EY is right about what happens in a fast takeoff world, it doesn’t really matter if you have kids or not.
In that sense, it’s basically a freeroll, though of course there are intermediate outcomes. I don’t immediately see any strong argument in favor of not having kids if you would otherwise want them.
If we don’t solve alignment and EY is right about what happens in a fast takeoff world, it doesn’t really matter if you have kids or not.
This IMO misses the obvious fact that you spend your life with a lot more anguish if you think that not just you, but your kid is going to die too. I don’t have a kid but everyone who does seems to describe a feeling of protectiveness that transcends any standard “I really care about this person” one you could experience with just about anyone else.
I’m sure this varies by kid, but I just asked my two older kids, age 9 and 7, and they both said they’re very glad that we decided to have them even if the world ends and everyone dies at some point in the next few years.
Which makes lots of sense to me: they seem quite happy, and it’s not surprising they would be opposed to never getting to exist even if it isn’t a full lifetime.
I think the idea here was sort of “if the kid is unaware and death comes suddenly and swiftly, they at least got a few years of life out of it”… cold as it sounds. But anyway, this also assumes the EY kind of FOOM scenario rather than one of the many others in which people stick around and the world just gets shittier and shittier.
It’s a pretty difficult topic to grapple with, especially given how much regret can come with not having had children in hindsight. Can’t say I have any answers for it. But it’s obviously not as simple as this answer makes it.
Yeah, but assuming your p(doom) isn’t really high, this needs to be balanced against the chance that AI goes well, and your kid has a really, really, really good life.
I don’t expect my daughter to ever have a job, but think that in more than half of worlds that seem possible to me right now, she has a very satisfying life—one that is better than it would be otherwise in part because she never has a job.
If your timelines are short-ish, you could likely have a child afterwards, because even if you’re a bit on the old side, hey, what, you don’t expect the ASI to find ways to improve health and fertility later in life?
I think the most important scenario to balance against is “nothing happens”, which is where you get shafted if you wait too long to have a child.
I don’t agree with that. I’m a parent of a 4-year-old who takes AI risk seriously. I think childhood is great in and of itself, and if the fate of my kid is to live until 20 and then experience some unthinkable AI apocalypse, that was 20 more good years of life than he would have had if I didn’t do anything. If that’s the deal of life it’s a pretty good deal and I don’t think there’s any reason to be particularly anguished about it on your kid’s behalf.
Do you think there could be an amount of suffering at the end of a life that would outweigh 20 good years? (Including that this end could take very long.)
Yes, I basically am not considering that because I am not aware of the arguments for why that’s a likely kind of risk (vs. the risk of simple annihilation, which I understand the basic arguments for.) If you think the future will be super miserable rather than simply nonexistent, then I understand why you might not have a kid.
I think the “stable totalitarianism” scenario is less science-fiction than the annihilation scenario, because you only need an extremely totalitarian state (something that already exists or existed) enhanced by AI. It is possible that this would come along with random torture. This would be possible with a misguided AI as well.
I mean this goes into the philosophical problem of whether it makes sense to compare utility of existent and virtual, non-existent agents but that would get long.
This argument works against anything you could do besides AI work, and thus it has to be considered in that wider frame. Going to the gym means less time for making AI go well. So does building a house, or watching Netflix. Some of these are longer time investments and some shorter, but the question still remains. First answer the question of how much effort you want to invest into making AI go well vs. all other things you could do, and then consider the fraction for children.
Perhaps people who can’t contribute to AI alignment directly could help indirectly by providing free babysitting for the people working on AI alignment?
Not AI risk specifically. But I had lengthy discussions with a friend about the general question of whether it is ethical to have children. The concerns in our discussions were overpopulation and how bad the world is in general and in Germany in particular. These much weaker concerns compared to extinction were enough for him not to have children. He also mentioned The Voluntary Human Extinction Movement. We still disagree on this. Mostly we disagree on how bad and failed the world is. I think it is not worse than it has been most of the time since forever. Maybe because I perceive less suffering (in myself and in others) than he does. We also disagree on how to deal with overpopulation. Whether to take local population into account. Whether to weigh by consumption. Whether to see this as an individual obligation, or a collective one. Or as an obligation at all. Still, we are good friends. Maybe that tells you something.
So you thought that overpopulation is not much of a concern and the world is not so bad, right? But if you had thought that overpopulation (or something else) was a really strong problem and also would have had very bad effects on your children (for example, if you had expected with a high probability that their life would have been a permanent postapocalyptic fight for food), would that have affected your decision?
Very likely. People have, at different times and places, reacted to perceived overpopulation with reduced fertility—if needed, by infanticide (Robin Hanson discusses this sometimes, e.g., here). Though what exactly was considered “too many children” was probably very different each time. It wasn’t always “bad effects on children”. A “permanent postapocalyptic fight for food” doesn’t seem like the strongest argument: people have lived in that state for most of human history.
Sorry, I don’t fully understand the answer. 1) You think that you would have reacted to perceived overpopulation by having fewer children, but 2) at the same time, you think that expecting your children to have to live in a permanent postapocalyptic fight for food is not such a strong argument, because that was normal in former times. But if the consideration in point 2 did not matter, why would the perception of overpopulation have mattered to you?
First, local overpopulation has a much higher influence than global overpopulation. I think it is arguable to do something about Korea’s shrinking population even if you are worried that there are not enough resources globally. In general, what is true for the average is not true for everybody.
Second, just because I, as an individual, (feel that I) can’t afford to have children, e.g., because society imposes costs on me to have children, that doesn’t mean that I wish such children to not exist (it might, but it’s a different claim).
Maybe I am not clear enough or we are talking past each other. So forget about the overpopulation. Suppose you lived in a universe where in the year in which you decided to have your first child some omniscient demon appeared and told you that
with probability p, some disaster happens 10 years later which causes everybody to die of starvation with certainty (with all the societal and psychological side effects that such a famine implies),
with probability 1-p, life in your country remains forever as it was in that year (it’s hypothetical, so please do not question whether that is possible).
So my questions:
Would there be some p where you would have decided not to have children?
How would the quality / kind of the disaster affect the decision?
How would the time horizon (10 years in my example) affect the decision?
Are there other societal or other global conditions where you think people should not have children?
I reject the usefulness of the thought experiment. In practice, there is almost always a possibility to affect the outcome. And the outcome is also almost never absolute (“starve with certainty”). And on top of that, my presumed inability to influence outcomes somehow also doesn’t influence my interest in wanting to have children. Thus, I wonder what you want to gain from this thought experiment? What is your crux?
But fine, let’s come up with a scenario that might fit the bill: It must be something that can’t be influenced, so natural causes are out. So aliens have sent us a message that they plan to do that and have proven that they have the technological means, and because of the means of transportation/communication we can’t send them a message back to convince them otherwise or something. And I guess I can work with the 1-p as it just means “stable comparable utility.”
What would I do? In this case, how we lived might still leave a testimonial of our life that the aliens might care about. Did we give up? Did we care about our families? Should we try to warn other civilizations about these aliens? So in this scenario, there is still a balance between what I could do, if not against it, then still about it. This trades against having children. But note that simply posing the scenario with a non-zero p slightly alters the decision for children: The presence of aliens would alter society and people might want to do something about it. Also, decision for children is a shared decision, so I additionally assume the partner would match this. But then it still depends on life circumstances. So I guess at p=~2% it would very slightly reduce the decision for children. From p=~25% it would start to draw more capacity. At p=~80% there wouldn’t be much left to have children.
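To make those rough thresholds (p≈2%, p≈25%, p≈80%) concrete, here is a toy expected-value sketch of the demon scenario. Every utility number and the `child_expected_value` helper are illustrative assumptions of mine, not anyone’s actual values:

```python
# Toy expected-value comparison for the "demon" thought experiment:
# with probability p a disaster strikes after 10 years; with probability
# 1-p life continues as normal. All numbers are made-up illustrations.

def child_expected_value(p, good_years=10, value_per_year=1.0,
                         disaster_suffering=5.0, normal_lifespan=70):
    """Expected lifetime value for the child, in arbitrary utility units."""
    disaster_branch = good_years * value_per_year - disaster_suffering
    normal_branch = normal_lifespan * value_per_year
    return p * disaster_branch + (1 - p) * normal_branch

# The three thresholds mentioned above:
for p in (0.02, 0.25, 0.80):
    print(f"p={p:.2f}: EV = {child_expected_value(p):.1f}")
```

Under these made-up numbers the expected value stays positive even at high p, which matches the intuition that a moderate p mostly shifts capacity rather than flipping the decision outright; a negative answer would require the disaster branch itself to be strongly negative.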
First of all, thanks for the detailed answer. I do not fully understand your position here, but the clarity of the answer to the thought experiment was helpful.
You reject the usefulness of the thought experiment, but I do not really understand why. Your reasons are that “in practice, there is almost always a possibility to affect the outcome” and that “the outcome is also almost never absolute”. With respect to the possibility to affect the outcome, I would say that I, as an individual, have to take most global situations as given. With respect to whether the outcome is “absolute”, you seem to mean that it is not a certain outcome or that not literally everybody would die. If it is just about the certainty, well, I included the subjective probability in the thought experiment. If it is about whether everybody dies, of course you can think of any probability distribution of outcomes, but what is gained by that? Then you say: “And on top of that, my presumed inability to influence outcomes somehow also doesn’t influence my interest in wanting to have children.” I do not really understand that sentence. Do you imply that powerful people naturally have a different amount of interest in wanting to have children? If so, why does that matter for the decision in the thought experiment?
You ask what I want to gain from this thought experiment.
Following lesswrong or EA community discussions about decisions about having children, I get the impression that the factors that influence the decision seem to be:
potentially reduced productivity (less time and energy for saving the world?),
immediate happiness / stress effect on the parents.
However, the ethics of bringing children into the world seem to be touched only superficially. This seems strange to me for a community in which thinking about ethics and thinking about the future are seen as valuable. @Julia Wise, writing about “Raising children on the eve of AI” says: “This is all assuming that the worst case is death rather than some kind of dystopia or torture scenario. Maybe unsurprisingly, I haven’t properly thought through the population ethics there. I find that very difficult to think about, and if you’re on the fence you should think more about it.” At the same time, the median community member’s expectation about the future seems very gloomy to me (though there are also people who seem very excited about a future of mind uploading, turning the world into a holodeck, or whatever).
I am confused about this attitude, and I try to determine whether
I just do not understand whether people on lesswrong expect the future to be bad or good,
people think even in case of a disaster with relevant likelihood, the future will definitely not include suffering that could outweigh some years of happiness,
people (who have children) have not thought about this in detail,
people do not think that any of this matters for some reason I overlook,
people tend to be taken in by motivated reasoning,
or something else.
So I tried to design a clear scenario to understand some parameters driving the decisions.
Why did I ask you about it? You have four children, you take part in discussions about the topic, you also write about alignment / AI risk.
You reject the usefulness of the thought experiment, but I do not really understand why. Your reasons are that “in practice, there is almost always a possibility to affect the outcome” and that “the outcome is also almost never absolute”. With respect to the possibility to affect the outcome, I would say that I, as an individual, have to take most global situations as given.
I agree that, as an individual, one cannot affect most outcomes significantly. But if everybody assumes that everybody else does the same, then nobody does anything, and thus definitely nothing happens or gets done. Everybody contributes small parts, but those aggregate into change, because somebody will be at the right place at the right time to do something, or ask the right question, or bring the right people together, etc. By ruling out the possibility, you take this effect away, and I have to price that into my model. If you or society wants to achieve something, you have to convince large numbers that change is possible and that it is important that everybody contributes. In management, that is called “building momentum.”
With respect to whether the outcome is “absolute”, you seem to mean that it is not a certain outcome or that not literally everybody would die. If it is just about the certainty, well, I included the subjective probability in the thought experiment. If it is about whether everybody dies, of course you can think of any probability distribution of outcomes, but what is gained by that?
You only added a binary probability between two options, keeping both individually rigid. It would have worked better to provide distributions for the number of people suffering, or the effectiveness of influence, etc. But because I didn’t know your intention with the thought experiment, I couldn’t just assume those.
Then you say: “And on top of that, my presumed inability to influence outcomes somehow also doesn’t influence my interest in wanting to have children.” I do not really understand that sentence. Do you imply that powerful people naturally have a different amount of interest in wanting to have children? If so, why does that matter for the decision in the thought experiment?
No, I don’t want to make that specific implication. Maybe powerful people have a different interest in having children, but I don’t know those forces and wouldn’t make a confident prediction either way.
But if I personally can’t influence results, I have to make assumptions as to why I can’t. Maybe I’m sick, maybe I’m legally limited in some way in your hypothetical. Such reasons would surely influence desire to have children.
Following lesswrong or EA community discussions about decisions about having children, I get the impression that the factors that influence the decision seem to be:
potentially reduced productivity (less time and energy for saving the world?),
immediate happiness / stress effect on the parents.
...
I think that there are many more reasons than this, including the ecological footprint of a child, personal reasons, general ethical reasons, and others. But I agree that there is no coherent picture. The community hasn’t come to terms with this, and this is more a marketplace of EA/LW-flavoured ideas. What else do you expect of a young and preparadigmatic field? People try to think hard about it, but it is, well, hard.
I am confused about this attitude, and I try to determine whether
I just do not understand whether people on lesswrong expect the future to be bad or good,
More bad than good, I guess. But it is a distribution as you can look up on Metaculus.
people think even in case of a disaster with relevant likelihood, the future will definitely not include suffering that could outweigh some years of happiness,
Some will think it and be worried. That’s what the s-risk sub-community is about, but I get the impression that it is a small part. And then there is the question of what suffering is, and whether it is “bad” or a problem to begin with (though most agree on that).
people (who have children) have not thought about this in detail,
Unsurprising, as having babies has always been and always will be (until/when/if uploading/bio-engineering) a normal part of life. Normal is normal. People do think about how many children they would want to have, but rarely whether.
people do not think that any of this matters for some reason I overlook,
people tend to be taken in by motivated reasoning,
or something else.
Sure, some, but I don’t think it is as bad as you seem to think.
So I tried to design a clear scenario to understand some parameters driving the decisions.
And here I think things went wrong. I think the scenario wasn’t good. It was unrealistic—cutting out too small a part of what you seem to be interested in.
Why did I ask you about it? You have four children, you take part in discussions about the topic, you also write about alignment / AI risk.
“But fine, let’s come up with a scenario that might fit the bill: It must be something that can’t be influenced, so natural causes are out.”
No, maybe I was not clear enough: The scenario is just about something that I cannot influence to a relevant extent. It does not matter whether mankind together is theoretically able to mitigate the disaster, because that is not directly relevant to individual decisions about having children.
“And I guess I can work with the 1-p as it just means ‘stable comparable utility.’”
I am not sure whether I understand what you mean, but I just meant that it is a world where the world can develop into one of two directions, and you have subjective probabilities about it.
“What would I do? In this case, how we lived might still leave a testimonial of our life that the aliens might care about. Did we give up? Did we care about our families? Should we try to warn other civilizations about these aliens? So in this scenario, there is still a balance between what I could do, if not against it, then still about it. This trades against having children.”
Yes, technically you might be able to do something relevant, but this is not why I came up with the thought experiment. So you can just assume that “you” in this scenario will not be able to save the world.
“But note that simply posing the scenario with a non-zero p slightly alters the decision for children: The presence of aliens would alter society and people might want to do something about it.”
How would that affect the decision, in your opinion?
“Also, decision for children is a shared decision, so I additionally assume the partner would match this.”
Even if something is a shared decision, you can always first think about your own preferences.
“But then it still depends on life circumstances. So I guess at p=~2% it would very slightly reduce the decision for children. From p=~25% it would start to draw more capacity. At p=~80% there wouldn’t be much left to have children.”
Thanks for the precise answer!
I am surprised about the 2%. May I ask
what your expectations about global catastrophic risks are for the next decades? (No extremely precise answer necessary.)
whether “would start to draw more capacity” implies that the whole expectation would only affect your decisions because you believe you would invest your time into saving the world, but not because of the effect of the expected future development on your (hypothetical) child’s life?
Thank you for continuing to engage in earnest dialogue. I’m currently traveling, but you deserve an answer. I will reply later. Feel free to elaborate in the meantime.
what your expectations about global catastrophic risks are for the next decades? (No extremely precise answer necessary.)
~10%, mostly from AI. Note that my comments about my responses to this probability are different from actual responses to having a baby, because the scenario is very different.
whether “would start to draw more capacity” implies that the whole expectation would only affect your decisions because you believe you would invest your time into saving the world, but not because of the effect of the expected future development on your (hypothetical) child’s life?
Thanks. I don’t understand the sentence “Note that my comments about my responses to this probability are different from actual responses to having a baby because the scenario is very different.” Would you be willing to elaborate?
In your hypothetical scenario I had to come up with very specific worlds including a lot of suffering. In the “regular” existential risk (mostly from AI), that distribution is different, thus having babies is affected differently.
Yes, technically you might be able to do something relevant, but this is not why I came up with the thought experiment. So you can just assume that “you” in this scenario will not be able to save the world.
I think I don’t understand what you want to do with the thought experiment. In many cases, it is not necessary to understand the purpose of a thought experiment, and I think thought experiments are often useful, including ones like this that put things into a yes/no question. But this one is special.
I cannot “just” “assume” that “I” will not be able to do something. These words mean something, and it plays a role here. To assume means that I restrict all possible worlds to those where the scenario holds. That is not easy here—the hypothetical worlds are rare and difficult to make concrete enough to ask my intuition about, thus I cannot “just” do it. Asking me to “just” do it means letting my intuition use heuristics that are mostly based on our current world and will pattern-match with things that probably do not correlate with what you are interested in. I’d rather refuse the question. I generally like answering polls and online questions. I think it is a good social habit to cultivate, to source input about a wide range of topics. But I refuse to answer those that are ill-posed. I say: “Mu.”
I assume we are either lost in translation (which means I cannot phrase my thoughts clearly or am unable to put myself in your shoes) or you do not want to think about the question for some reason. I think I have to give up here. Nonetheless, thank you very much for the answer.
I guess “I don’t want to answer the question” is a decent summary. I spent so much time on answering why because I felt that there were too many assumptions in the scenario.
“Also, decision for children is a shared decision, so I additionally assume the partner would match this.”
Even if something is a shared decision, you can always first think about your own preferences.
But my preferences are entangled with everybody else’s preferences. If you put me into hypothetical worlds where those people prefer differently, so do I.
People may disagree with this entanglement, but they are likely wrong. Our preferences are largely shaped by our upbringing, our peers, and society at large. And this is necessary: people cannot grow up into functioning adults without this learning environment, as is shown, e.g., by feral children.
Of course, preferences are shaped by your social environment, but I assume that in any given situation you could still state a preference on the basis of which you would then enter into an exchange with the other relevant people?
In fact it’s hard to find probable worlds where having kids is a really bad idea, IMO.
One scenario where you might want to have kids in general, but not if timelines are short, is if you feel positive about having kids but view the first few years as a chore (i.e., it costs you time, sleep, and money). So if you view kids as an investment of the form “take a hit to your happiness now, get more happiness back later”, then not having kids now seems justifiable. But I think that this sort of reasoning requires pretty short timelines (which I have), with high confidence (which I don’t have), and high confidence that the first few years of having kids are net-negative happiness for you (which I don’t have).
(But overall I endorse the claim that, mostly, if you would have otherwise wanted kids, you should still have them.)
My anecdotal evidence from relatives with toddlers is that the first few years with your first child are indeed the most stressful experience of your life. I barely even meet them anymore, because all their free time is eaten by childcare. Not sure about happiness, but people who openly admit to regretting having their kids face huge social stigma, and I doubt you could get an honest answer to that question.
+ the obvious fact that it might matter to the kid that they’re going to die
(edit: fwiw I broadly think people who want to have kids should have kids)
How would you expect the end of the world to take place if the AI doom scenarios turn out to be true?
Could you please briefly describe the median future you expect?
I agree that it’s bad to raise a child in an environment of extreme anxiety. Don’t do that.
Also try to avoid being very doomy and anxious in general, it’s not a healthy state to be in. (Easier said than done, I realize.)
Having kids does mean less time to help AI go well, so maybe it’s not so much of a good idea if you’re one of the people doing alignment work.
strong AGI could still be decades away
Heh, that’s why I put “strong” in there!
I agree with this take. I already have four children, and I wouldn’t decide against children because of AI risks.
Did you take such things into account when you made the decision, or decisions?
Not AI risk specifically. But I had lengthy discussions with a friend about the general question of whether it is ethical to have children. The concerns in our discussions were overpopulation and how bad the world is in general, and in Germany in particular. These much weaker concerns, compared to extinction, were enough for him not to have children. He also mentioned the Voluntary Human Extinction Movement. We still disagree on this. Mostly we disagree on how bad and failed the world is. I think it is not worse than it has been most of the time since forever. Maybe because I perceive less suffering (in myself and in others) than he does. We also disagree on how to deal with overpopulation: whether to take local population into account, whether to weigh by consumption, whether to see this as an individual obligation or a collective one, or as an obligation at all. Still, we are good friends. Maybe that tells you something.
So you thought that overpopulation is not much of a concern and the world is not so bad, right? But if you had thought that overpopulation (or something else) was a really strong problem and also would have had very bad effects on your children (for example, if you had expected with a high probability that their life would have been a permanent postapocalyptic fight for food), would that have affected your decision?
Very likely. People have, at different times and places, reacted to perceived overpopulation with reduced fertility—if needed, by infanticide (Robin Hanson discusses this sometimes, e.g., here). Though what exactly was considered “too many children” was probably very different each time. It wasn’t always “bad effects on children”. A “permanent postapocalyptic fight for food” doesn’t seem like the strongest argument: people have lived in that state for most of human history.
Sorry, I don’t fully understand the answer. 1) You think that you would have reacted to perceived overpopulation by having fewer children, but 2) at the same time, you think that expecting your children to have to live in a permanent postapocalyptic fight for food is not such a strong argument, because that was normal in former times. But if the consideration in point 2 would not have mattered, why would the perception of overpopulation have mattered to you?
First, local overpopulation has a much greater influence than global overpopulation. I think it is arguable to do something about Korea’s shrinking population even if you are worried that there are not enough resources globally. In general, what is true for the average is not true for everybody.
Second, just because I, as an individual, (feel that I) can’t afford to have children, e.g., because society imposes costs on me to have children, that doesn’t mean that I wish such children to not exist (it might, but it’s a different claim).
Maybe I am not clear enough or we are talking past each other. So forget about the overpopulation. Suppose you lived in a universe where in the year in which you decided to have your first child some omniscient demon appeared and told you that
with probability p, some disaster happens 10 years later which causes dying of starvation for everybody with certainty (with all the societal and psychological side effects that such a famine implies),
with probability 1-p, life in your country remains forever as it was in that year (it’s hypothetical, so please do not question whether that is possible).
So my questions:
Would there be some p where you would have decided not to have children?
How would the quality / kind of the disaster affect the decision?
How would the time horizon (10 years in my example) affect the decision?
Are there other societal or other global conditions where you think people should not have children?
I reject the usefulness of the thought experiment. In practice, there is almost always a possibility to affect the outcome. And then the outcome is also almost never absolute (“starve with certainty”). And on top of that, my presumed inability to influence outcomes somehow also doesn’t influence my interest in wanting to have children. Thus, I wonder what you want to gain from this thought experiment? What is your crux?
But fine, let’s come up with a scenario that might fit the bill: It must be something that can’t be influenced, so natural causes are out. So aliens have sent us a message that they plan to do that and have proven that they have the technological means, and because of the means of transportation/communication we can’t send them a message back to convince them otherwise or something. And I guess I can work with the 1-p as it just means “stable comparable utility.”
What would I do? In this case, how we lived might still leave a testimonial of our life that the aliens might care about. Did we give up? Did we care about our families? Should we try to warn other civilizations about these aliens? So in this scenario, there is still a balance between what I could do, if not against it, then still about it. This trades against having children. But note that simply posing the scenario with a non-zero p slightly alters the decision for children: the presence of aliens would alter society, and people might want to do something about it. Also, the decision to have children is a shared decision, so I additionally assume the partner would match this. But then it still depends on life circumstances. So I guess at p=~2% it would very slightly reduce the decision for children. From p=~25% it would start to draw more capacity. At p=~80% there wouldn’t be much left to have children.
First of all, thanks for the detailed answer. I do not fully understand your position here, but the clarity of the answer to the thought experiment was helpful.
You reject the usefulness of the thought experiment, but I do not really understand why. Your reasons are that “in practice, there is almost always a possibility to affect the outcome” and that “the outcome is also almost never absolute”. With respect to the possibility to affect the outcome, I would say that I, as an individual, have to take most global situations as given. With respect to whether the outcome is “absolute”, you seem to mean that it is not a certain outcome or that not literally everybody would die. If it is just about the certainty, well, I included the subjective probability in the thought experiment. If it is about whether everybody dies, of course you can think of any probability distribution of outcomes, but what is gained by that? Then you say: “my presumed inability to influence outcomes somehow also doesn’t influence my interest in wanting to have children.” I do not really understand that sentence. Do you imply that powerful people naturally have a different amount of interest in wanting to have children? If so, why does that matter for the decision in the thought experiment?
You ask what I want to gain from this thought experiment.
Following lesswrong or EA community discussions about decisions about having children, I get the impression that the factors that influence the decision seem to be:
potentially reduced productivity (less time and energy for saving the world?),
immediate happiness / stress effect on the parents.
However, the ethics of bringing children into the world seem to be touched only superficially. This seems strange to me for a community in which thinking about ethics and thinking about the future are seen as valuable. @Julia Wise, writing about “Raising children on the eve of AI” says: “This is all assuming that the worst case is death rather than some kind of dystopia or torture scenario. Maybe unsurprisingly, I haven’t properly thought through the population ethics there. I find that very difficult to think about, and if you’re on the fence you should think more about it.” At the same time, the median community member’s expectation about the future seems very gloomy to me (though there are also people who seem very excited about a future of mind uploading, turning the world into a holodeck, or whatever).
I am confused about this attitude, and I try to determine whether
I just do not understand whether people on lesswrong expect the future to be bad or good,
people think even in case of a disaster with relevant likelihood, the future will definitely not include suffering that could outweigh some years of happiness,
people (who have children) have not thought about this in detail,
people do not think that any of this matters for some reason I overlook,
people tend to be taken in by motivated reasoning,
or something else.
So I tried to design a clear scenario to understand some parameters driving the decisions.
Why did I ask you about it? You have four children, you take part in discussions about the topic, you also write about alignment / AI risk.
I agree that, as an individual, one cannot affect most outcomes significantly. But if everybody assumes everybody else does that too, then nobody does anything, and thus definitely nothing happens/is done. Everybody contributes small parts, but those aggregate to change, because somebody will be at the right place at the right time to do something, or ask the right question, or bring the right people together, etc. By ruling out the possibility, you take this effect away, and I have to price that into my model. If you or society wants to achieve something, you have to convince large numbers that change is possible and that it is important that everybody contributes. In management, that is called “building momentum.”
You only added a binary probability between two options, keeping both individually rigid. It would have worked better to provide distributions for the number of people suffering, or the effectiveness of influence, etc., but because I didn’t know your intention for the thought experiment, I couldn’t just assume those.
No, I don’t want to make that specific implication. Maybe powerful people have a different interest in having children, but I don’t know those forces and wouldn’t make a confident prediction either way.
But if I personally can’t influence results, I have to make assumptions as to why I can’t. Maybe I’m sick; maybe I’m legally limited in some way in your hypothetical. Such reasons would surely influence the desire to have children.
I think that there are many more reasons than this, including the ecological footprint of a child, personal reasons, general ethical reasons, and others. But I agree that there is no coherent picture. The community hasn’t come to terms with this, and this is more a marketplace of EA/LW-flavoured ideas. What else do you expect of a young and preparadigmatic field? People try to think hard about it, but it is, well, hard.
More bad than good, I guess. But it is a distribution as you can look up on Metaculus.
Some will think it and be worried. That’s what the s-risk sub-community is about, but I get the impression that is a small part. And then there is the question of what suffering is and whether it is “bad” or a problem to begin with (though most agree on that).
Unsurprising, as having babies has always been and always will be (until/when/if uploading/bio-engineering) a normal part of life. Normal is normal. People do think about how many children they would want to have, but rarely whether.
Sure, some, but I don’t think it is as bad as you seem to think.
And here I think things went wrong. I think the scenario wasn’t good. It was unrealistic, carving out too small a part of what you seem to be interested in.
Thank you.
About your reaction to the thought experiment:
“But fine, let’s come up with a scenario that might fit the bill: It must be something that can’t be influenced, so natural causes are out.”
No, maybe I was not clear enough: The scenario is just about something that I cannot influence to a relevant extent. It does not matter whether mankind together is theoretically able to mitigate the disaster, because that is not directly relevant to individual decisions about having children.
“And I guess I can work with the 1-p as it just means ‘stable comparable utility.’”
I am not sure whether I understand what you mean, but I just meant that it is a world where the world can develop into one of two directions, and you have subjective probabilities about it.
“What would I do? In this case, how we lived might still leave a testimonial of our life that the aliens might care about. Did we give up? Did we care about our families? Should we try to warn other civilizations about these aliens? So in this scenario, there is still a balance between what I could do, if not against it, then still about it. This trades against having children.”
Yes, technically you might be able to do something relevant, but this is not why I came up with the thought experiment. So you can just assume that “you” in this scenario will not be able to save the world.
“But note that simply posing the scenario with a non-zero p slightly alters the decision for children: The presence of aliens would alter society and people might want to do something about it.”
How would that affect the decision, in your opinion?
“Also, decision for children is a shared decision, so I additionally assume the partner would match this.”
Even if something is a shared decision, you can always first think about your own preferences.
“But then it still depends on life circumstances. So I guess at p=~2% it would very slightly reduce the decision for children. From p=~25% it would start to draw more capacity. At p=~80% there wouldn’t be much left to have children.”
Thanks for the precise answer!
I am surprised about the 2%. May I ask
what your expectations about global catastrophic risks are for the next decades? (No extremely precise answer necessary.)
whether “would start to draw more capacity” implies that the whole expectation would only affect your decisions because you believe you would invest your time into saving the world, but not because of the effect of the expected future development on your (hypothetical) child’s life?
Thank you for continuing to engage in earnest dialogue. I’m currently traveling, but you deserve an answer. I will reply later. Feel free to elaborate in the meantime.
While I don’t have much to elaborate, maybe the following headline captures the relevant mood: https://unchartedterritories.tomaspueyo.com/p/what-would-you-do-if-you-had-8-years
This comment is just to note that I’d still be happy about an answer.
~10%, mostly from AI. Note that my comments about my responses to this probability are different from actual responses to having a baby, because the scenario is very different.
Not only, but mostly, yes.
Thanks. I don’t understand the sentence “Note that my comments about my responses to this probability are different from actual responses to having a baby because the scenario is very different.” Would you be willing to elaborate?
In your hypothetical scenario I had to come up with very specific worlds including a lot of suffering. In the “regular” existential risk (mostly from AI), that distribution is different, thus having babies is affected differently.
Thanks for the reminder.
I think I don’t understand what you want to do with the thought experiment. In many cases, it is not necessary to understand the purpose of a thought experiment, and I think thought experiments are often useful, including ones like this that put things into a yes/no question. But this one is special.
I cannot “just” “assume” that “I” will not be able to do something. These words mean something, and it plays a role here. To assume means that I restrict all possible worlds to those where the scenario holds. That is not easy here: the hypothetical worlds are rare and difficult to make concrete enough to ask my intuition about, thus I cannot “just” do it. Asking me to “just” do it means letting my intuition use heuristics that are mostly based on our current world and will pattern-match with things that probably do not correlate with what you are interested in. I’d rather refuse the question. I generally like answering polls and online questions; I think it is a good social habit to cultivate for sourcing input about a wide range of topics. But I refuse to answer those that are ill-posed. I say: “Mu.”
I assume we are either lost in translation (which means I cannot phrase my thoughts clearly or am unable to put myself in your shoes) or you do not want to think about the question for some reason. I think I have to give up here. Nonetheless, thank you very much for the answer.
I guess “I don’t want to answer the question” is a decent summary. I spent so much time on answering why because I felt that there were too many assumptions in the scenario.
But my preferences are entangled with everybody else’s preferences. If you put me into hypothetical worlds where those people prefer differently, so do I.
People may disagree with this entanglement, but they are likely wrong. Our preferences are largely shaped by our upbringing, our peers, and society at large. And this is necessary: people cannot grow up into functioning adults without this learning environment, as is shown, e.g., by feral children.
Of course preferences are shaped by your social environment, but I assume that in any given situation you could still state a preference, on the basis of which you would then enter into an exchange with the other relevant people?
Yes, that would be possible, but it would lead to different results.
One scenario where you might want to have kids in general, but not if timelines are short, is if you feel positive about having kids, but you view the first few years of having kids as a chore (i.e., they cost you time, sleep, and money). So if you view kids as an investment of the form “take a hit to your happiness now, get more happiness back later”, then not having kids now seems justifiable. But I think that this sort of reasoning requires pretty short timelines (which I have), with high confidence (which I don’t have), and high confidence that the first few years of having kids are net-negative happiness for you (which I don’t have).
(But overall I endorse the claim that, mostly, if you would have otherwise wanted kids, you should still have them.)
My anecdotal evidence from relatives with toddlers is that the first few years with your first child are indeed the most stressful experience of your life. I barely even meet them anymore, because all their free time is eaten by childcare. Not sure about happiness, but people who openly admit to regretting having their kids face huge social stigma, and I doubt you could get an honest answer to that question.