[Question] Should you refrain from having children because of the risk posed by artificial intelligence?
Eli Lifland discusses AI risk probabilities here.
Scott Alexander talks about how everything will change completely in this post, and then says: “There’s some chance I’m wrong about a singularity, there’s some chance we make it through the singularity, and if I’m wrong about both those things I’d rather give my kid 30 years of life than none at all. Nobody gets more than about 100 anyway and 30 and 100 aren’t that different in the grand scheme of things. I’d feel an obligation not to bring kids into a world that would have too much suffering but I think if we die from technological singularity it will be pretty quick. I don’t plan on committing suicide to escape and I don’t see why I should be not bringing life into the world either.” I have never seen a convincing argument for why, if we die from a technological singularity, it would have to “be pretty quick”.
Will MacAskill says that “conditional on misaligned takeover, I think like 50⁄50 chance that involves literally killing human beings, rather than just disempowering them”, but “just” being disempowered does not seem like a great alternative, and I do not see why a misaligned AI would treat disempowered humans well.
It seems to me that the world into which children are born today has a high likelihood of turning out really badly. Is it still a good idea to have children, taking their perspective into account rather than treating them merely as a way of satisfying their parents’ hard-wired preferences?
I am currently not only confused but also quite gloomy, and I would be grateful for your opinions. Optimistic ones are welcome, but being realistic is more important.