I couldn’t easily remember this, so I tried throwing it into our beta-testing LessWrong-contexted LLM. It pulled in these posts as potentially relevant (some of this doesn’t seem like what you meant, but filtering it manually didn’t feel worth it). (I’m interested in whether the following turned out to be helpful.)
Eliezer Yudkowsky offers an interesting perspective in his post For The People Who Are Still Alive. He argues that in a “Big World” scenario (where the universe is vast or infinite), we should focus more on the welfare of existing people rather than creating new ones. He states:
It seems to me that in a Big World, the people who already exist in your region have a much stronger claim on your charity than babies who have not yet been born into your region in particular.
In a similar vein, Wei Dai’s post The Moral Status of Independent Identical Copies explores related issues. While not directly about longevity, it addresses questions about how we should value additional copies of existing people versus new people. This has implications for how we might think about extending lives versus creating new ones.
(it said more stuff but much of it seemed less relevant)
Thank you! Seems like this bot works quite well for this task.