I am in bad shape but I will try to answer. I was interested in responding to your question but wanted more context. And it seems, from what you say, that the issue is not at all about whether humanity or individual humans live forever, but whether one should be attached to their existence, or whether the pursuit of some EA-style greater good would always favor their existence.
I mean, your friend seems to be suggesting a blasé attitude towards the prospect of human extinction via AI. Such an attitude has some precedent in the Stoic attitude towards individual death: one aims to be unaffected by it, by telling oneself that this is the natural order. With humanity menaced by an extinction that one may be helpless to prevent, an individual might analogously find some peace by telling themselves it's all just nature.
But someone once observed that people who aim to be philosophically accepting of the death of humanity would often be far more concerned about the death of individuals, children for example. If that is the case, it suggests that they haven't really grasped what human extinction means.
There is much more that can be said about all these issues, but that’s the best that I can do for now.
I do not think they meant anything AI-specific, just the general existence of humanity versus other species.
The question was not about whether humanity will live forever; the original prompt was "why must human/humanity live/continue forever?", which is what the original question asked.
Do not feel the need to reply in any way; I appreciate that you try to, or feel the need to, reply quickly, but nothing here is urgent. (Not sure why you would reply while in bad shape or mention that; I initially thought it was related to the topic.)