My feeling is that in most AI-kills-us-all scenarios, the AI kills us all quickly.
Will it be my fault for having brought them into the world while knowing this would happen?
You don’t know that this will happen, so no. Arguably it will be your fault for having brought them into the world while knowing it might happen—but we all already know that if we have children they are likely to die eventually, and that they might suffer any quantity of the slings and arrows of outrageous fortune. Those of us who have children generally either haven’t tried to weigh the good against the bad or else have decided that the good outweighs the bad; it is not obvious to me that the risk of AI catastrophe makes a big difference to that calculation, though of course what you think about it will depend on how likely you think the various possible kinds of catastrophe are.
Even if I think we’ll all die painlessly, how can I look at my children and not already be mourning their death from day 1?
I’m sorry to be the bearer of bad news, but any children you have will most likely die anyway in the end, AI or no AI. When I look at my daughter, or at anyone else I care about, I am not “already mourning their death” (unless maybe they are terminally ill) because, well, why should I be? There’s plenty else about them to celebrate, plenty of things to pay attention to in the moment; why should their death be what I focus on, if it isn’t imminent and if I can’t do much about it?
Even if my children’s short lives are happy, wouldn’t their happiness be fundamentally false and devoid of meaning?
Any notion of “meaning” that tells you that no one’s happiness should be celebrated needs throwing out and replacing with something better.
A happy child is a happy child. Their happiness makes the world brighter. If they are in fact inevitably going to be dead five years from now, that is a sad fact but it doesn’t nullify the value of their happiness now.
A question you haven’t (explicitly) asked: Suppose you refrain from having children now, out of fear of AI catastrophe, and suppose that it turns out that there is no AI catastrophe in the near future. How would you feel about that?
I don’t want to claim that you should definitely have children. Maybe you shouldn’t. That depends (among other things) on how likely you actually think AI catastrophe is, and how you expect it to unfold if it happens. But I do think that, AI or no AI, catastrophe or no catastrophe, children or no children, you will likely be both happier and more effective in whatever you do if you are able to get past that sense of doom and distress.