If we don’t solve alignment and EY is right about what happens in a fast takeoff world, it doesn’t really matter if you have kids or not.
This IMO misses the obvious fact that you spend your life with a lot more anguish if you think that not just you, but your kid is going to die too. I don’t have a kid but everyone who does seems to describe a feeling of protectiveness that transcends any standard “I really care about this person” one you could experience with just about anyone else.
+ the obvious fact that it might matter to the kid that they’re going to die
(edit: fwiw I broadly think people who want to have kids should have kids)
I’m sure this varies by kid, but I just asked my two older kids, age 9 and 7, and they both said they’re very glad that we decided to have them even if the world ends and everyone dies at some point in the next few years.
Which makes lots of sense to me: they seem quite happy, and it’s not surprising they would be opposed to never getting to exist even if it isn’t a full lifetime.
How would you expect the end of the world to take place if the AI doom scenarios turn out to be true?
I think the idea here was sort of “if the kid is unaware and death comes suddenly and swiftly, they at least got a few years of life out of it”… cold as it sounds. But anyway, this also assumes the EY kind of FOOM scenario rather than one of the many others in which people stick around and the world just gets shittier and shittier.
It’s a pretty difficult topic to grapple with, especially given how much regret can come in hindsight from not having had children. Can’t say I have any answers for it. But it’s obviously not as simple as this answer makes it out to be.
Yeah, but assuming your p(doom) isn’t really high, this needs to be balanced against the chance that AI goes well, and your kid has a really, really, really good life.
I don’t expect my daughter to ever have a job, but think that in more than half of worlds that seem possible to me right now, she has a very satisfying life—one that is better than it would be otherwise in part because she never has a job.
If your timelines are short-ish, you could likely have a child afterwards, because even if you’re a bit on the old side, hey, don’t you expect the ASI to find ways to improve health and fertility later in life?
I think the most important scenario to balance against is “nothing happens”, which is where you get shafted if you wait too long to have a child.
Could you please briefly describe the median future you expect?
I agree that it’s bad to raise a child in an environment of extreme anxiety. Don’t do that.
Also try to avoid being very doomy and anxious in general, it’s not a healthy state to be in. (Easier said than done, I realize.)
I don’t agree with that. I’m a parent of a 4-year-old who takes AI risk seriously. I think childhood is great in and of itself, and if the fate of my kid is to live until 20 and then experience some unthinkable AI apocalypse, that was 20 more good years of life than he would have had otherwise. If that’s the deal of life, it’s a pretty good deal, and I don’t think there’s any reason to be particularly anguished about it on your kid’s behalf.
Do you think there could be an amount of suffering at the end of a life that would outweigh 20 good years? (Keeping in mind that this end could take a very long time.)
Yes, but I’m basically not considering that, because I’m not aware of the arguments for why that’s a likely kind of risk (vs. the risk of simple annihilation, which I understand the basic arguments for). If you think the future will be super miserable rather than simply nonexistent, then I understand why you might not have a kid.
I think the “stable totalitarianism” scenario is less science-fictional than the annihilation scenario, because you only need an extremely totalitarian state (something that already exists or has existed) enhanced by AI. It is possible that this would come along with random torture. That would also be possible with a misguided AI.
I mean, this gets into the philosophical problem of whether it makes sense to compare the utility of existent agents with that of merely potential, non-existent ones, but that would get long.