Is this comment asking whether these assumptions are in my post?
Having kids when we can have AGI in 10-25 years is good, and not, actually, very evil ("OMG, what are you doing?").
You don’t buy that having kids [when we can have AGI soon...] is good, right? OK, I disagree with that strongly, population-wise. Are you implying that the whole planet should stop having kids because we are approaching AGI? That seems like a surefire way to wreck civilisation even if the AI alignment problem turns out to be simpler than we think, or is miraculously solved. Will MacAskill also argues against this position in “What We Owe The Future”.
Specifically for people who directly work in AI safety (or have the intellectual capacity to meaningfully contribute to AI safety and other urgent x-risk priorities, and consider doing so), this is a less clear-cut case, I agree. This is one of the reasons why I’m personally unsure whether I should have kids.
The right social incentives can’t make A LOT of people poly pretty fast.
There was no such assumption. The young man in my “mainline scenario” doesn’t have this choice, alas. He has no romantic relationships with humans at all. Also, I’m afraid that AI partners will soon become sufficiently compelling that, after prolonged relationships with them, it will be hard for people to summon the motivation to court average-looking (at best), shallow, and dull humans who don’t seem interested in such a relationship themselves.