...how many people do you know who could honestly claim, “I wish I had never been born”?
I personally know a few who suffer from severe disabilities and who do not enjoy life. But this is nothing compared to the era between 10^20 and 10^100 years, when possibly trillions of God-like entities will be slowly disabled by an increasing lack of resources. This is comparable to suffering from Alzheimer’s, only much worse and longer, and without any hope.
I agree, that sounds very depressing. However, I don’t understand the minds, emotions, or culture of the entities that will exist then, and as such, I don’t think it’s ethical for me to decide in advance how bad it is. We don’t kill seniors with Alzheimer’s, because it’s not up to us to judge whether their life is worth living or not.
Plus, I just don’t see the point in making a binding decision now about potential suffering in the far future, when we could make it N years from now. I don’t see how suicide would be harder later, if it turns out to be actually rational (as long as we aim to maintain freedom).
To pull the plug later could (1) be impossible, or (2) result in more deaths than it would now.
However, I agree with you. It was not my intention to suggest we should abort humanity, but rather to inquire about the similarities to aborting a fetus that is predicted to suffer from severe disabilities in its possible future life.
Further, my intention was to inquire about the perception that it is our moral responsibility to minimize suffering. If we cannot minimize it by actively shaping the universe, but rather risk increasing it, what should we do?
I don’t really understand your greater argument. Inaction (e.g. sitting on Earth, not pursuing AI, not pursuing growth) is not morally neutral. By failing to act, we’re risking suffering in various ways: insufficiency of resources on the planet, political and social problems, or a Singularity perpetrated by actors who are not acting in the interest of humanity’s values. All of these could potentially result in the non-existence of all the future actors we’re discussing. That’s got to be first and foremost in any discussion of our moral responsibility toward them.
We can’t opt out of shaping the universe, so we ought to do as good a job as we can, as per our values. The more powerful humanity is, the more options are open to us, and the better our descendants will be able to re-evaluate our choices and further steer our future.
The argument is about action. We forbid inbreeding because it causes suffering in future generations. Now, if there is no way that the larger future could be desirable, i.e. if suffering prevails, then I ask: how many entities have to suffer before we forbid humanity to seed the universe? What is your expected number of entities born after 10^20 years who will face an increasing lack of resources until the end, at around 10^100 years? All of them are doomed to face a future that might be shocking and undesirable. That is not a small part of the future but most of it.
The more powerful humanity is, the more options are open to us, and the better our descendants will be able to re-evaluate our choices and further steer our future.
But what reason is there to expect that we will ever be able to stop entropy?
If we can’t stop entropy, then we can’t stop entropy, but I still don’t see why our descendants should be less able to deal with this fact than we are. We appreciate living regardless, and so may they.
Surely posthuman entities living at the 10^20 year mark can figure out much more accurately than us whether it’s ethical to continue to grow and/or have children at that point.
As far as I can tell, the single real doomsday scenario here requires four conditions: (1) posthumans are no longer free to commit suicide, (2) they nevertheless continue to breed, (3) heat death is inevitable, and (4) life in a world with ever-decreasing resources is a fate worse than death. That would be pretty bad, but the first and last conditions seem to me unlikely enough, and all four are inscrutable enough from our limited perspective, that I don’t see a present concern.