Thanks for reading, and for this very reasonable and relevant question! My primary concern is, indeed, (ethical/effectively altruistic) antinatalism in general. I do think, though, that one specific justification for reproduction, namely that we need new brains to do intelligence augmentation/amplification research on, is especially (frankly, no offense intended) insane.

I’m posting this essay on Less Wrong because Yudkowsky maintains an underlying transhumanist/cryonics/intelligence-augmentation presupposition that is, firstly, popular with members of this forum and, more importantly, demonstrably irrational (despite Eliezer’s being an obvious badass rationalist in so many other areas of his thought, AI misalignment being the big one). That is, the inference “childbirth is good because intelligence augmentation is good, and reproduction means more intelligence, raw intelligence at least” is a particularly illustrative example of optimism bias. (As I see it, all cognitive biases are members of the set “optimism bias,” in that each assumes a false positive; hence my “pessimism” is still, on my view, a “realism.”)

So yes, antinatalism in general is my main pragmatic message, provided “antinatalism” includes not only non-reproduction but also prioritizing care for existing children above all else, even above the non-reproduction issue itself. I’m drawing attention to this specific example of pro-natalist/vitalist thinking here on Less Wrong because I think it makes a useful introduction to the overall issue of stop-reproducing-for-g-d’s-sake, and because I feel this is the community most likely to take the argument seriously and examine the cognitive bias at play. Hopefully that might include Yudkowsky himself, but I’m grateful to have this conversation with anyone capable of approaching it rationally, which includes you. So, again, thank you!