To argue the pro-natalist position here: I think the facts under consideration should actually give having kids (assuming you’re not a terrible parent) a potentially much higher expected moral utility than almost anything else.
The strongest argument for having kids is that the influence they may have on the world (most obviously by voting on hypothetical future AI policy) becomes unfathomably large when multiplied by the stakes of the potential outcomes, even if that influence is marginal (and it may not be, if your children are extremely successful).
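To make the multiplication concrete, here is a minimal back-of-envelope sketch in Python. Every number in it (`p_child_shifts_outcome`, `future_lives_at_stake`, and so on) is a hypothetical placeholder invented purely for illustration, not an estimate the argument above commits to:

```python
# Toy expected-value calculation; all figures are hypothetical placeholders.
p_child_shifts_outcome = 1e-9   # assumed: chance one extra person tips AI policy
future_lives_at_stake = 1e16    # assumed: future people in a good outcome
utility_per_life = 1.0          # normalize one good life to 1 util

expected_utility = p_child_shifts_outcome * future_lives_at_stake * utility_per_life
print(expected_utility)  # 1e7 utils: a tiny marginal influence still dominates
```

The point is only that under any assignment of numbers in this spirit, the astronomical size of the stakes swamps the smallness of the marginal influence.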
From your hypothetical children’s perspective, this scenario is also lopsidedly positive. If AI isn’t aligned, it probably kills people quickly, in which case they would still have had a better overall life than most people in history.
Now it’s important to consider that the upside for anyone alive when AI is successfully aligned is so high that it breaks moral philosophies like negative utilitarianism: the minor inconveniences of a single immortal (provided you agree that including some minor suffering increases total net utility) would likely eventually outweigh all pre-singularity human suffering, by virtue of both a staggering amount of subjective experience and the potentially much higher pain tolerance of post-humans.
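To see why minor inconveniences eventually dominate, here is a second toy calculation. Again, every figure (`immortal_years`, `inconvenience_per_year`, etc.) is an assumed placeholder chosen only to show the orders of magnitude at play:

```python
# Hypothetical orders-of-magnitude comparison; none of these are real estimates.
humans_ever = 1.17e11            # roughly the number of humans born pre-singularity
avg_suffering_per_life = 1.0     # normalize a typical historical life's suffering to 1

immortal_years = 1e15            # assumed: one post-human living ~a quadrillion years
inconvenience_per_year = 1e-3    # assumed: a thousandth of a typical life's suffering

total_historical = humans_ever * avg_suffering_per_life   # ~1.17e11
immortal_total = immortal_years * inconvenience_per_year  # 1e12

print(immortal_total > total_historical)  # True: the immortal's total eventually wins
```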
Of course, if AI is aligned you can probably have kids afterwards, though I think scenarios where a mostly benevolent AI decides to seriously limit who can have kids are somewhat likely. Waiting to have kids until after a singularity is strictly worse, however, than having them both before and after: you also miss out on astronomical amounts of moral utility by not influencing the likelihood of a good singularity outcome.